00:00:00.000 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v22.11" build number 92 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3270 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.119 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.119 The recommended git tool is: git 00:00:00.119 using credential 00000000-0000-0000-0000-000000000002 00:00:00.121 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.155 Fetching changes from the remote Git repository 00:00:00.158 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.196 Using shallow fetch with depth 1 00:00:00.196 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.196 > git --version # timeout=10 00:00:00.235 > git --version # 'git version 2.39.2' 00:00:00.235 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.255 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.255 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.289 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.300 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.311 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:05.311 > git config core.sparsecheckout # timeout=10 00:00:05.326 > git read-tree -mu HEAD # timeout=10 00:00:05.348 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:05.370 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:05.370 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:05.448 [Pipeline] Start of Pipeline 00:00:05.459 [Pipeline] library 00:00:05.460 Loading library shm_lib@master 00:00:05.460 Library shm_lib@master is cached. Copying from home. 00:00:05.473 [Pipeline] node 00:00:05.479 Running on VM-host-SM16 in /var/jenkins/workspace/nvme-vg-autotest_2 00:00:05.481 [Pipeline] { 00:00:05.489 [Pipeline] catchError 00:00:05.490 [Pipeline] { 00:00:05.500 [Pipeline] wrap 00:00:05.507 [Pipeline] { 00:00:05.514 [Pipeline] stage 00:00:05.515 [Pipeline] { (Prologue) 00:00:05.531 [Pipeline] echo 00:00:05.532 Node: VM-host-SM16 00:00:05.538 [Pipeline] cleanWs 00:00:05.545 [WS-CLEANUP] Deleting project workspace... 00:00:05.545 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.550 [WS-CLEANUP] done 00:00:05.724 [Pipeline] setCustomBuildProperty 00:00:05.806 [Pipeline] httpRequest 00:00:05.833 [Pipeline] echo 00:00:05.834 Sorcerer 10.211.164.101 is alive 00:00:05.841 [Pipeline] httpRequest 00:00:05.844 HttpMethod: GET 00:00:05.845 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:05.845 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:05.856 Response Code: HTTP/1.1 200 OK 00:00:05.857 Success: Status code 200 is in the accepted range: 200,404 00:00:05.857 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:08.322 [Pipeline] sh 00:00:08.602 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:08.620 [Pipeline] httpRequest 00:00:08.643 [Pipeline] echo 00:00:08.645 Sorcerer 10.211.164.101 is alive 00:00:08.655 [Pipeline] httpRequest 00:00:08.659 HttpMethod: GET 00:00:08.660 URL: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:08.660 Sending request to url: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:08.667 Response Code: HTTP/1.1 200 OK 00:00:08.668 Success: Status code 200 is in the accepted range: 200,404 00:00:08.668 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:01:03.484 [Pipeline] sh 00:01:03.762 + tar --no-same-owner -xf spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:01:06.295 [Pipeline] sh 00:01:06.574 + git -C spdk log --oneline -n5 00:01:06.574 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:01:06.574 330a4f94d nvme: check pthread_mutex_destroy() return value 00:01:06.574 7b72c3ced nvme: add nvme_ctrlr_lock 00:01:06.574 fc7a37019 nvme: always use nvme_robust_mutex_lock for ctrlr_lock 00:01:06.574 3e04ecdd1 bdev_nvme: use spdk_nvme_ctrlr_fail() on ctrlr_loss_timeout 00:01:06.591 [Pipeline] withCredentials 00:01:06.600 > git --version # timeout=10 00:01:06.641 > git --version # 'git version 2.39.2' 00:01:06.653 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:06.655 [Pipeline] { 00:01:06.663 [Pipeline] retry 00:01:06.664 [Pipeline] { 00:01:06.679 [Pipeline] sh 00:01:06.953 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:07.222 [Pipeline] } 00:01:07.246 [Pipeline] // retry 00:01:07.253 [Pipeline] } 00:01:07.275 [Pipeline] // withCredentials 00:01:07.287 [Pipeline] httpRequest 00:01:07.312 [Pipeline] echo 00:01:07.313 Sorcerer 10.211.164.101 is alive 00:01:07.322 [Pipeline] httpRequest 00:01:07.326 HttpMethod: GET 00:01:07.326 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:07.327 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:07.348 Response Code: HTTP/1.1 200 OK 00:01:07.349 Success: Status code 200 is in the accepted range: 200,404 00:01:07.349 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:22.578 [Pipeline] sh 00:01:22.856 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:24.772 [Pipeline] sh 00:01:25.055 + git -C dpdk log --oneline -n5 00:01:25.055 caf0f5d395 version: 22.11.4 00:01:25.055 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:25.055 dc9c799c7d vhost: fix missing spinlock unlock 
00:01:25.055 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:25.055 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:25.076 [Pipeline] writeFile 00:01:25.093 [Pipeline] sh 00:01:25.375 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:25.389 [Pipeline] sh 00:01:25.670 + cat autorun-spdk.conf 00:01:25.670 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.670 SPDK_TEST_NVME=1 00:01:25.670 SPDK_TEST_FTL=1 00:01:25.670 SPDK_TEST_ISAL=1 00:01:25.670 SPDK_RUN_ASAN=1 00:01:25.670 SPDK_RUN_UBSAN=1 00:01:25.670 SPDK_TEST_XNVME=1 00:01:25.670 SPDK_TEST_NVME_FDP=1 00:01:25.670 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:25.670 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:25.670 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:25.675 RUN_NIGHTLY=1 00:01:25.677 [Pipeline] } 00:01:25.692 [Pipeline] // stage 00:01:25.710 [Pipeline] stage 00:01:25.711 [Pipeline] { (Run VM) 00:01:25.723 [Pipeline] sh 00:01:26.001 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:26.001 + echo 'Start stage prepare_nvme.sh' 00:01:26.001 Start stage prepare_nvme.sh 00:01:26.001 + [[ -n 5 ]] 00:01:26.001 + disk_prefix=ex5 00:01:26.001 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]] 00:01:26.001 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]] 00:01:26.001 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf 00:01:26.001 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.001 ++ SPDK_TEST_NVME=1 00:01:26.001 ++ SPDK_TEST_FTL=1 00:01:26.001 ++ SPDK_TEST_ISAL=1 00:01:26.001 ++ SPDK_RUN_ASAN=1 00:01:26.001 ++ SPDK_RUN_UBSAN=1 00:01:26.001 ++ SPDK_TEST_XNVME=1 00:01:26.001 ++ SPDK_TEST_NVME_FDP=1 00:01:26.001 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:26.001 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:26.001 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:26.001 ++ RUN_NIGHTLY=1 00:01:26.001 + cd /var/jenkins/workspace/nvme-vg-autotest_2 00:01:26.001 + nvme_files=() 00:01:26.001 + declare -A nvme_files 00:01:26.001 + backend_dir=/var/lib/libvirt/images/backends 00:01:26.001 + nvme_files['nvme.img']=5G 00:01:26.001 + nvme_files['nvme-cmb.img']=5G 00:01:26.001 + nvme_files['nvme-multi0.img']=4G 00:01:26.001 + nvme_files['nvme-multi1.img']=4G 00:01:26.001 + nvme_files['nvme-multi2.img']=4G 00:01:26.001 + nvme_files['nvme-openstack.img']=8G 00:01:26.001 + nvme_files['nvme-zns.img']=5G 00:01:26.001 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:26.001 + (( SPDK_TEST_FTL == 1 )) 00:01:26.001 + nvme_files["nvme-ftl.img"]=6G 00:01:26.001 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:26.001 + nvme_files["nvme-fdp.img"]=1G 00:01:26.001 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:26.001 + for nvme in "${!nvme_files[@]}" 00:01:26.001 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:01:26.001 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:26.001 + for nvme in "${!nvme_files[@]}" 00:01:26.001 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-ftl.img -s 6G 00:01:26.001 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:26.001 + for nvme in "${!nvme_files[@]}" 00:01:26.001 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:01:26.259 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:26.259 + for nvme in "${!nvme_files[@]}" 00:01:26.259 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:01:26.259 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:26.259 + for nvme in "${!nvme_files[@]}" 00:01:26.259 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:01:26.259 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:26.259 + for nvme in "${!nvme_files[@]}" 00:01:26.260 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:01:26.260 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:26.260 + for nvme in "${!nvme_files[@]}" 00:01:26.260 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:01:26.260 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:26.260 + for nvme in "${!nvme_files[@]}" 00:01:26.260 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-fdp.img -s 1G 00:01:26.260 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:26.260 + for nvme in "${!nvme_files[@]}" 00:01:26.260 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:01:27.193 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:27.193 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:01:27.193 + echo 'End stage prepare_nvme.sh' 00:01:27.193 End stage prepare_nvme.sh 00:01:27.203 [Pipeline] sh 00:01:27.480 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:27.480 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex5-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:01:27.480 00:01:27.480 
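For reference, the prepare_nvme.sh trace above builds an associative array mapping image names to sizes, adds extra entries only when the matching test flag is set, and then loops over the keys to create each raw backing file. A minimal stand-alone sketch of that pattern follows; the sizes and backend directory are taken from the log, while the use of qemu-img in place of spdk/scripts/vagrant/create_nvme_img.sh is an assumption made to keep the sketch self-contained.

#!/usr/bin/env bash
# Stand-alone sketch of the prepare_nvme.sh image loop traced above.
# Sizes and backend_dir mirror the log; qemu-img stands in for
# spdk/scripts/vagrant/create_nvme_img.sh (an assumption for self-containment).
set -euo pipefail

disk_prefix=ex5
backend_dir=/var/lib/libvirt/images/backends

declare -A nvme_files=(
  ['nvme.img']=5G
  ['nvme-cmb.img']=5G
  ['nvme-multi0.img']=4G
  ['nvme-multi1.img']=4G
  ['nvme-multi2.img']=4G
  ['nvme-openstack.img']=8G
  ['nvme-zns.img']=5G
)
# Extra images are added only when the matching test is enabled in the conf.
(( ${SPDK_TEST_FTL:-0} == 1 )) && nvme_files['nvme-ftl.img']=6G
(( ${SPDK_TEST_NVME_FDP:-0} == 1 )) && nvme_files['nvme-fdp.img']=1G

mkdir -p "$backend_dir"
for nvme in "${!nvme_files[@]}"; do
  # Raw, fallocate-preallocated files, matching the
  # "fmt=raw ... preallocation=falloc" lines in the log.
  qemu-img create -f raw -o preallocation=falloc \
    "$backend_dir/$disk_prefix-$nvme" "${nvme_files[$nvme]}"
done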
DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant 00:01:27.480 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk 00:01:27.480 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2 00:01:27.480 HELP=0 00:01:27.480 DRY_RUN=0 00:01:27.480 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,/var/lib/libvirt/images/backends/ex5-nvme-fdp.img, 00:01:27.480 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:27.480 NVME_AUTO_CREATE=0 00:01:27.480 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,, 00:01:27.480 NVME_CMB=,,,, 00:01:27.480 NVME_PMR=,,,, 00:01:27.480 NVME_ZNS=,,,, 00:01:27.480 NVME_MS=true,,,, 00:01:27.480 NVME_FDP=,,,on, 00:01:27.480 SPDK_VAGRANT_DISTRO=fedora38 00:01:27.480 SPDK_VAGRANT_VMCPU=10 00:01:27.480 SPDK_VAGRANT_VMRAM=12288 00:01:27.480 SPDK_VAGRANT_PROVIDER=libvirt 00:01:27.480 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:27.480 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:27.480 SPDK_OPENSTACK_NETWORK=0 00:01:27.480 VAGRANT_PACKAGE_BOX=0 00:01:27.480 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:01:27.480 FORCE_DISTRO=true 00:01:27.480 VAGRANT_BOX_VERSION= 00:01:27.480 EXTRA_VAGRANTFILES= 00:01:27.480 NIC_MODEL=e1000 00:01:27.480 00:01:27.480 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt' 00:01:27.480 /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest_2 00:01:31.677 Bringing machine 'default' up with 'libvirt' provider... 00:01:31.934 ==> default: Creating image (snapshot of base box volume). 00:01:31.934 ==> default: Creating domain with the following settings... 
00:01:31.934 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721048428_cb1a22aaca422846ae13 00:01:31.934 ==> default: -- Domain type: kvm 00:01:31.934 ==> default: -- Cpus: 10 00:01:31.934 ==> default: -- Feature: acpi 00:01:31.934 ==> default: -- Feature: apic 00:01:31.934 ==> default: -- Feature: pae 00:01:31.934 ==> default: -- Memory: 12288M 00:01:31.934 ==> default: -- Memory Backing: hugepages: 00:01:31.934 ==> default: -- Management MAC: 00:01:31.934 ==> default: -- Loader: 00:01:31.934 ==> default: -- Nvram: 00:01:31.934 ==> default: -- Base box: spdk/fedora38 00:01:31.934 ==> default: -- Storage pool: default 00:01:31.934 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721048428_cb1a22aaca422846ae13.img (20G) 00:01:31.934 ==> default: -- Volume Cache: default 00:01:31.934 ==> default: -- Kernel: 00:01:31.934 ==> default: -- Initrd: 00:01:31.934 ==> default: -- Graphics Type: vnc 00:01:31.934 ==> default: -- Graphics Port: -1 00:01:31.934 ==> default: -- Graphics IP: 127.0.0.1 00:01:31.934 ==> default: -- Graphics Password: Not defined 00:01:32.194 ==> default: -- Video Type: cirrus 00:01:32.194 ==> default: -- Video VRAM: 9216 00:01:32.194 ==> default: -- Sound Type: 00:01:32.194 ==> default: -- Keymap: en-us 00:01:32.194 ==> default: -- TPM Path: 00:01:32.194 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:32.194 ==> default: -- Command line args: 00:01:32.194 ==> default: -> value=-device, 00:01:32.194 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:32.194 ==> default: -> value=-drive, 00:01:32.194 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:32.194 ==> default: -> value=-device, 00:01:32.194 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:32.194 ==> default: -> value=-device, 00:01:32.194 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:32.194 ==> default: -> value=-drive, 00:01:32.194 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-1-drive0, 00:01:32.194 ==> default: -> value=-device, 00:01:32.194 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:32.194 ==> default: -> value=-device, 00:01:32.194 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:32.194 ==> default: -> value=-drive, 00:01:32.194 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:32.194 ==> default: -> value=-device, 00:01:32.194 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:32.194 ==> default: -> value=-drive, 00:01:32.194 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:32.194 ==> default: -> value=-device, 00:01:32.194 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:32.194 ==> default: -> value=-drive, 00:01:32.194 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:32.194 ==> default: -> value=-device, 00:01:32.194 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:32.194 ==> default: -> value=-device, 00:01:32.194 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:32.194 ==> default: -> value=-device, 00:01:32.194 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:32.194 ==> default: -> value=-drive, 00:01:32.194 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:32.194 ==> default: -> value=-device, 00:01:32.194 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:32.194 ==> default: Creating shared folders metadata... 00:01:32.194 ==> default: Starting domain. 00:01:34.092 ==> default: Waiting for domain to get an IP address... 00:01:52.156 ==> default: Waiting for SSH to become available... 00:01:52.156 ==> default: Configuring and enabling network interfaces... 00:01:56.333 default: SSH address: 192.168.121.229:22 00:01:56.333 default: SSH username: vagrant 00:01:56.333 default: SSH auth method: private key 00:01:58.230 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:06.410 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:11.676 ==> default: Mounting SSHFS shared folder... 00:02:13.050 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:02:13.050 ==> default: Checking Mount.. 00:02:14.424 ==> default: Folder Successfully Mounted! 00:02:14.424 ==> default: Running provisioner: file... 00:02:14.990 default: ~/.gitconfig => .gitconfig 00:02:15.555 00:02:15.555 SUCCESS! 00:02:15.555 00:02:15.555 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:02:15.555 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:15.555 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 00:02:15.555 00:02:15.563 [Pipeline] } 00:02:15.579 [Pipeline] // stage 00:02:15.587 [Pipeline] dir 00:02:15.587 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt 00:02:15.588 [Pipeline] { 00:02:15.599 [Pipeline] catchError 00:02:15.601 [Pipeline] { 00:02:15.616 [Pipeline] sh 00:02:15.897 + vagrant ssh-config --host vagrant 00:02:15.897 + sed -ne /^Host/,$p 00:02:15.897 + tee ssh_conf 00:02:20.119 Host vagrant 00:02:20.119 HostName 192.168.121.229 00:02:20.119 User vagrant 00:02:20.119 Port 22 00:02:20.119 UserKnownHostsFile /dev/null 00:02:20.119 StrictHostKeyChecking no 00:02:20.119 PasswordAuthentication no 00:02:20.119 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:20.119 IdentitiesOnly yes 00:02:20.119 LogLevel FATAL 00:02:20.119 ForwardAgent yes 00:02:20.119 ForwardX11 yes 00:02:20.119 00:02:20.132 [Pipeline] withEnv 00:02:20.135 [Pipeline] { 00:02:20.149 [Pipeline] sh 00:02:20.423 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:20.423 source /etc/os-release 00:02:20.423 [[ -e /image.version ]] && img=$(< /image.version) 00:02:20.423 # Minimal, systemd-like check. 
00:02:20.423 if [[ -e /.dockerenv ]]; then 00:02:20.423 # Clear garbage from the node's name: 00:02:20.423 # agt-er_autotest_547-896 -> autotest_547-896 00:02:20.423 # $HOSTNAME is the actual container id 00:02:20.423 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:20.423 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:20.423 # We can assume this is a mount from a host where container is running, 00:02:20.423 # so fetch its hostname to easily identify the target swarm worker. 00:02:20.423 container="$(< /etc/hostname) ($agent)" 00:02:20.423 else 00:02:20.423 # Fallback 00:02:20.423 container=$agent 00:02:20.423 fi 00:02:20.423 fi 00:02:20.423 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:20.423 00:02:20.693 [Pipeline] } 00:02:20.715 [Pipeline] // withEnv 00:02:20.725 [Pipeline] setCustomBuildProperty 00:02:20.781 [Pipeline] stage 00:02:20.784 [Pipeline] { (Tests) 00:02:20.804 [Pipeline] sh 00:02:21.082 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:21.355 [Pipeline] sh 00:02:21.633 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:21.906 [Pipeline] timeout 00:02:21.906 Timeout set to expire in 40 min 00:02:21.908 [Pipeline] { 00:02:21.927 [Pipeline] sh 00:02:22.257 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:22.820 HEAD is now at 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:02:22.834 [Pipeline] sh 00:02:23.112 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:23.382 [Pipeline] sh 00:02:23.655 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:23.926 [Pipeline] sh 00:02:24.199 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:02:24.457 ++ readlink -f spdk_repo 00:02:24.457 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:24.457 + [[ -n /home/vagrant/spdk_repo ]] 00:02:24.457 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:24.457 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:24.457 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:24.457 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:24.457 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:24.457 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:24.457 + cd /home/vagrant/spdk_repo 00:02:24.457 + source /etc/os-release 00:02:24.457 ++ NAME='Fedora Linux' 00:02:24.457 ++ VERSION='38 (Cloud Edition)' 00:02:24.457 ++ ID=fedora 00:02:24.457 ++ VERSION_ID=38 00:02:24.457 ++ VERSION_CODENAME= 00:02:24.457 ++ PLATFORM_ID=platform:f38 00:02:24.457 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:24.457 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:24.457 ++ LOGO=fedora-logo-icon 00:02:24.457 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:24.457 ++ HOME_URL=https://fedoraproject.org/ 00:02:24.457 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:24.457 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:24.457 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:24.457 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:24.457 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:24.457 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:24.457 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:24.457 ++ SUPPORT_END=2024-05-14 00:02:24.457 ++ VARIANT='Cloud Edition' 00:02:24.457 ++ VARIANT_ID=cloud 00:02:24.457 + uname -a 00:02:24.457 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:24.457 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:24.715 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:24.973 Hugepages 00:02:24.973 node hugesize free / total 00:02:24.973 node0 1048576kB 0 / 0 00:02:24.973 node0 2048kB 0 / 0 00:02:24.973 00:02:24.973 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:24.973 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:24.973 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:24.973 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:25.230 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:25.230 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:25.230 + rm -f /tmp/spdk-ld-path 00:02:25.230 + source autorun-spdk.conf 00:02:25.230 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:25.230 ++ SPDK_TEST_NVME=1 00:02:25.230 ++ SPDK_TEST_FTL=1 00:02:25.230 ++ SPDK_TEST_ISAL=1 00:02:25.230 ++ SPDK_RUN_ASAN=1 00:02:25.230 ++ SPDK_RUN_UBSAN=1 00:02:25.230 ++ SPDK_TEST_XNVME=1 00:02:25.230 ++ SPDK_TEST_NVME_FDP=1 00:02:25.230 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:25.230 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:25.230 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:25.230 ++ RUN_NIGHTLY=1 00:02:25.230 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:25.230 + [[ -n '' ]] 00:02:25.230 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:25.230 + for M in /var/spdk/build-*-manifest.txt 00:02:25.230 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:25.230 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:25.230 + for M in /var/spdk/build-*-manifest.txt 00:02:25.230 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:25.230 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:25.230 ++ uname 00:02:25.230 + [[ Linux == \L\i\n\u\x ]] 00:02:25.230 + sudo dmesg -T 00:02:25.230 + sudo dmesg --clear 00:02:25.230 + dmesg_pid=6043 00:02:25.230 + sudo dmesg -Tw 00:02:25.230 + [[ Fedora Linux == FreeBSD ]] 
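For reference, the FDP-capable fourth controller in the QEMU argument list above (an nvme-subsys device with fdp=on plus an nvme controller and nvme-ns namespace bound to it) can be reproduced outside Vagrant with a minimal invocation like the sketch below. Only the NVMe-related flags come from the log; the memory size is arbitrary, the 1G ex5-nvme-fdp.img backing file is assumed to already exist, and FDP emulation needs QEMU 8.0 or newer (the job pins vanilla-v8.0.0).

#!/usr/bin/env bash
# Minimal reproduction of the FDP controller wiring from the domain's QEMU
# arguments above. NVMe flags are taken from the log; everything else is an
# illustrative assumption.
img=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img

qemu-system-x86_64 \
  -m 1024 -nographic \
  -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
  -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
  -drive format=raw,file="$img",if=none,id=nvme-3-drive0 \
  -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,logical_block_size=4096,physical_block_size=4096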
00:02:25.230 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:25.230 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:25.230 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:25.230 + [[ -x /usr/src/fio-static/fio ]] 00:02:25.230 + export FIO_BIN=/usr/src/fio-static/fio 00:02:25.230 + FIO_BIN=/usr/src/fio-static/fio 00:02:25.230 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:25.230 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:25.230 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:25.230 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:25.230 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:25.230 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:25.230 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:25.230 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:25.230 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:25.230 Test configuration: 00:02:25.230 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:25.230 SPDK_TEST_NVME=1 00:02:25.230 SPDK_TEST_FTL=1 00:02:25.230 SPDK_TEST_ISAL=1 00:02:25.230 SPDK_RUN_ASAN=1 00:02:25.230 SPDK_RUN_UBSAN=1 00:02:25.230 SPDK_TEST_XNVME=1 00:02:25.230 SPDK_TEST_NVME_FDP=1 00:02:25.230 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:25.230 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:25.230 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:25.230 RUN_NIGHTLY=1 13:01:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:25.230 13:01:21 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:25.230 13:01:21 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:25.230 13:01:21 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:25.230 13:01:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.230 13:01:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.230 13:01:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.230 13:01:21 -- paths/export.sh@5 -- $ export PATH 00:02:25.230 13:01:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.230 
13:01:21 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:25.230 13:01:21 -- common/autobuild_common.sh@437 -- $ date +%s 00:02:25.230 13:01:21 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721048481.XXXXXX 00:02:25.230 13:01:21 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721048481.L3NgX2 00:02:25.230 13:01:21 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:02:25.230 13:01:21 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']' 00:02:25.230 13:01:21 -- common/autobuild_common.sh@444 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:25.230 13:01:21 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:25.230 13:01:21 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:25.230 13:01:21 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:25.487 13:01:21 -- common/autobuild_common.sh@453 -- $ get_config_params 00:02:25.487 13:01:21 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:02:25.487 13:01:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:25.487 13:01:21 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:25.487 13:01:21 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:02:25.487 13:01:21 -- pm/common@17 -- $ local monitor 00:02:25.487 13:01:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.487 13:01:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.487 13:01:21 -- pm/common@25 -- $ sleep 1 00:02:25.487 13:01:21 -- pm/common@21 -- $ date +%s 00:02:25.487 13:01:21 -- pm/common@21 -- $ date +%s 00:02:25.487 13:01:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721048481 00:02:25.487 13:01:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721048481 00:02:25.487 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721048481_collect-vmstat.pm.log 00:02:25.487 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721048481_collect-cpu-load.pm.log 00:02:26.418 13:01:22 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:02:26.418 13:01:22 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:26.418 13:01:22 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:26.418 13:01:22 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:26.418 13:01:22 -- spdk/autobuild.sh@16 -- $ date -u 00:02:26.418 Mon Jul 15 01:01:22 PM UTC 2024 00:02:26.418 13:01:22 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:26.418 v24.05-13-g5fa2f5086 00:02:26.418 13:01:23 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:26.418 13:01:23 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:26.418 13:01:23 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:26.418 13:01:23 -- 
common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:26.418 13:01:23 -- common/autotest_common.sh@10 -- $ set +x 00:02:26.418 ************************************ 00:02:26.418 START TEST asan 00:02:26.418 ************************************ 00:02:26.418 using asan 00:02:26.418 13:01:23 asan -- common/autotest_common.sh@1121 -- $ echo 'using asan' 00:02:26.418 00:02:26.418 real 0m0.000s 00:02:26.418 user 0m0.000s 00:02:26.418 sys 0m0.000s 00:02:26.418 13:01:23 asan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:26.418 13:01:23 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:26.418 ************************************ 00:02:26.418 END TEST asan 00:02:26.418 ************************************ 00:02:26.418 13:01:23 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:26.418 13:01:23 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:26.418 13:01:23 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:26.418 13:01:23 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:26.418 13:01:23 -- common/autotest_common.sh@10 -- $ set +x 00:02:26.418 ************************************ 00:02:26.418 START TEST ubsan 00:02:26.418 ************************************ 00:02:26.418 using ubsan 00:02:26.418 13:01:23 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:02:26.418 00:02:26.418 real 0m0.000s 00:02:26.418 user 0m0.000s 00:02:26.418 sys 0m0.000s 00:02:26.418 13:01:23 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:26.418 13:01:23 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:26.418 ************************************ 00:02:26.418 END TEST ubsan 00:02:26.418 ************************************ 00:02:26.418 13:01:23 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:26.418 13:01:23 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:26.418 13:01:23 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:26.418 13:01:23 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:02:26.418 13:01:23 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:26.418 13:01:23 -- common/autotest_common.sh@10 -- $ set +x 00:02:26.418 ************************************ 00:02:26.418 START TEST build_native_dpdk 00:02:26.418 ************************************ 00:02:26.418 13:01:23 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@68 -- $ 
gcc -dumpversion 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:26.418 caf0f5d395 version: 22.11.4 00:02:26.418 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:26.418 dc9c799c7d vhost: fix missing spinlock unlock 00:02:26.418 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:26.418 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:26.418 13:01:23 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:26.418 13:01:23 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:26.418 13:01:23 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:26.418 13:01:23 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:26.418 13:01:23 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:26.418 13:01:23 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 
00:02:26.418 13:01:23 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:26.418 13:01:23 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:26.418 13:01:23 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:26.419 13:01:23 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:26.419 13:01:23 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:26.419 13:01:23 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:26.419 13:01:23 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:26.419 13:01:23 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:26.419 13:01:23 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:26.419 13:01:23 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:26.419 13:01:23 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:02:26.419 13:01:23 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:02:26.419 13:01:23 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:26.419 13:01:23 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:02:26.419 13:01:23 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:02:26.419 13:01:23 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:02:26.419 13:01:23 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:02:26.419 13:01:23 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:26.419 13:01:23 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:02:26.419 13:01:23 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:26.419 13:01:23 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:26.419 13:01:23 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:02:26.419 13:01:23 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:26.419 patching file config/rte_config.h 00:02:26.419 Hunk #1 succeeded at 60 (offset 1 line). 
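For reference, the lt/cmp_versions trace above splits both version strings on '.', compares them field by field, and returns 1 because 22.11.4 is not older than 21.11.0. A compact stand-alone equivalent is sketched below; the helper name and structure are simplified and are not SPDK's actual scripts/common.sh implementation.

#!/usr/bin/env bash
# Simplified stand-in for the lt/cmp_versions check traced above: split both
# versions on '.', compare numerically field by field, and succeed only if the
# first is strictly older.
version_lt() {
  local -a ver1 ver2
  local v a b len
  IFS=. read -ra ver1 <<< "$1"
  IFS=. read -ra ver2 <<< "$2"
  len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    a=${ver1[v]:-0}
    b=${ver2[v]:-0}
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1   # equal versions are not "less than"
}

# 22.11.4 is not older than 21.11.0, matching the "return 1" seen in the trace.
if version_lt 22.11.4 21.11.0; then
  echo "older than 21.11"
else
  echo "21.11 or newer"
fi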
00:02:26.419 13:01:23 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:02:26.419 13:01:23 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:02:26.675 13:01:23 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:02:26.675 13:01:23 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:26.675 13:01:23 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:31.942 The Meson build system 00:02:31.942 Version: 1.3.1 00:02:31.942 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:31.942 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:31.942 Build type: native build 00:02:31.942 Program cat found: YES (/usr/bin/cat) 00:02:31.942 Project name: DPDK 00:02:31.942 Project version: 22.11.4 00:02:31.942 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:31.942 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:31.942 Host machine cpu family: x86_64 00:02:31.942 Host machine cpu: x86_64 00:02:31.942 Message: ## Building in Developer Mode ## 00:02:31.942 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:31.942 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:31.942 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:31.942 Program objdump found: YES (/usr/bin/objdump) 00:02:31.942 Program python3 found: YES (/usr/bin/python3) 00:02:31.942 Program cat found: YES (/usr/bin/cat) 00:02:31.942 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:31.942 Checking for size of "void *" : 8 00:02:31.942 Checking for size of "void *" : 8 (cached) 00:02:31.942 Library m found: YES 00:02:31.942 Library numa found: YES 00:02:31.942 Has header "numaif.h" : YES 00:02:31.942 Library fdt found: NO 00:02:31.942 Library execinfo found: NO 00:02:31.942 Has header "execinfo.h" : YES 00:02:31.942 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:31.942 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:31.942 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:31.942 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:31.942 Run-time dependency openssl found: YES 3.0.9 00:02:31.942 Run-time dependency libpcap found: YES 1.10.4 00:02:31.942 Has header "pcap.h" with dependency libpcap: YES 00:02:31.942 Compiler for C supports arguments -Wcast-qual: YES 00:02:31.942 Compiler for C supports arguments -Wdeprecated: YES 00:02:31.942 Compiler for C supports arguments -Wformat: YES 00:02:31.942 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:31.942 Compiler for C supports arguments -Wformat-security: NO 00:02:31.942 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:31.942 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:31.942 Compiler for C supports arguments -Wnested-externs: YES 00:02:31.942 Compiler for C supports arguments -Wold-style-definition: YES 00:02:31.942 Compiler for C supports arguments -Wpointer-arith: YES 00:02:31.942 Compiler for C supports arguments -Wsign-compare: YES 00:02:31.942 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:31.942 Compiler for C supports arguments -Wundef: YES 00:02:31.942 Compiler for C supports arguments -Wwrite-strings: YES 00:02:31.942 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:31.942 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:31.942 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:31.942 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:31.942 Compiler for C supports arguments -mavx512f: YES 00:02:31.942 Checking if "AVX512 checking" compiles: YES 00:02:31.942 Fetching value of define "__SSE4_2__" : 1 00:02:31.942 Fetching value of define "__AES__" : 1 00:02:31.942 Fetching value of define "__AVX__" : 1 00:02:31.942 Fetching value of define "__AVX2__" : 1 00:02:31.942 Fetching value of define "__AVX512BW__" : (undefined) 00:02:31.942 Fetching value of define "__AVX512CD__" : (undefined) 00:02:31.943 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:31.943 Fetching value of define "__AVX512F__" : (undefined) 00:02:31.943 Fetching value of define "__AVX512VL__" : (undefined) 00:02:31.943 Fetching value of define "__PCLMUL__" : 1 00:02:31.943 Fetching value of define "__RDRND__" : 1 00:02:31.943 Fetching value of define "__RDSEED__" : 1 00:02:31.943 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:31.943 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:31.943 Message: lib/kvargs: Defining dependency "kvargs" 00:02:31.943 Message: lib/telemetry: Defining dependency "telemetry" 00:02:31.943 Checking for function "getentropy" : YES 00:02:31.943 Message: lib/eal: Defining dependency "eal" 00:02:31.943 Message: lib/ring: Defining dependency "ring" 00:02:31.943 Message: lib/rcu: Defining dependency "rcu" 00:02:31.943 Message: lib/mempool: Defining dependency "mempool" 00:02:31.943 Message: lib/mbuf: Defining dependency "mbuf" 00:02:31.943 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:02:31.943 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:31.943 Compiler for C supports arguments -mpclmul: YES 00:02:31.943 Compiler for C supports arguments -maes: YES 00:02:31.943 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:31.943 Compiler for C supports arguments -mavx512bw: YES 00:02:31.943 Compiler for C supports arguments -mavx512dq: YES 00:02:31.943 Compiler for C supports arguments -mavx512vl: YES 00:02:31.943 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:31.943 Compiler for C supports arguments -mavx2: YES 00:02:31.943 Compiler for C supports arguments -mavx: YES 00:02:31.943 Message: lib/net: Defining dependency "net" 00:02:31.943 Message: lib/meter: Defining dependency "meter" 00:02:31.943 Message: lib/ethdev: Defining dependency "ethdev" 00:02:31.943 Message: lib/pci: Defining dependency "pci" 00:02:31.943 Message: lib/cmdline: Defining dependency "cmdline" 00:02:31.943 Message: lib/metrics: Defining dependency "metrics" 00:02:31.943 Message: lib/hash: Defining dependency "hash" 00:02:31.943 Message: lib/timer: Defining dependency "timer" 00:02:31.943 Fetching value of define "__AVX2__" : 1 (cached) 00:02:31.943 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:31.943 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:31.943 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:31.943 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:31.943 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:31.943 Message: lib/acl: Defining dependency "acl" 00:02:31.943 Message: lib/bbdev: Defining dependency "bbdev" 00:02:31.943 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:31.943 Run-time dependency libelf found: YES 0.190 00:02:31.943 Message: lib/bpf: Defining dependency "bpf" 00:02:31.943 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:31.943 Message: lib/compressdev: Defining dependency "compressdev" 00:02:31.943 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:31.943 Message: lib/distributor: Defining dependency "distributor" 00:02:31.943 Message: lib/efd: Defining dependency "efd" 00:02:31.943 Message: lib/eventdev: Defining dependency "eventdev" 00:02:31.943 Message: lib/gpudev: Defining dependency "gpudev" 00:02:31.943 Message: lib/gro: Defining dependency "gro" 00:02:31.943 Message: lib/gso: Defining dependency "gso" 00:02:31.943 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:31.943 Message: lib/jobstats: Defining dependency "jobstats" 00:02:31.943 Message: lib/latencystats: Defining dependency "latencystats" 00:02:31.943 Message: lib/lpm: Defining dependency "lpm" 00:02:31.943 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:31.943 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:31.943 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:31.943 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:31.943 Message: lib/member: Defining dependency "member" 00:02:31.943 Message: lib/pcapng: Defining dependency "pcapng" 00:02:31.943 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:31.943 Message: lib/power: Defining dependency "power" 00:02:31.943 Message: lib/rawdev: Defining dependency "rawdev" 00:02:31.943 Message: lib/regexdev: Defining dependency "regexdev" 00:02:31.943 Message: lib/dmadev: Defining dependency "dmadev" 00:02:31.943 Message: lib/rib: Defining 
dependency "rib" 00:02:31.943 Message: lib/reorder: Defining dependency "reorder" 00:02:31.943 Message: lib/sched: Defining dependency "sched" 00:02:31.943 Message: lib/security: Defining dependency "security" 00:02:31.943 Message: lib/stack: Defining dependency "stack" 00:02:31.943 Has header "linux/userfaultfd.h" : YES 00:02:31.943 Message: lib/vhost: Defining dependency "vhost" 00:02:31.943 Message: lib/ipsec: Defining dependency "ipsec" 00:02:31.943 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:31.943 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:31.943 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:31.943 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:31.943 Message: lib/fib: Defining dependency "fib" 00:02:31.943 Message: lib/port: Defining dependency "port" 00:02:31.943 Message: lib/pdump: Defining dependency "pdump" 00:02:31.943 Message: lib/table: Defining dependency "table" 00:02:31.943 Message: lib/pipeline: Defining dependency "pipeline" 00:02:31.943 Message: lib/graph: Defining dependency "graph" 00:02:31.943 Message: lib/node: Defining dependency "node" 00:02:31.943 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:31.943 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:31.943 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:31.943 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:31.943 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:31.943 Compiler for C supports arguments -Wno-unused-value: YES 00:02:31.943 Compiler for C supports arguments -Wno-format: YES 00:02:31.943 Compiler for C supports arguments -Wno-format-security: YES 00:02:31.943 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:33.315 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:33.315 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:33.315 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:33.315 Fetching value of define "__AVX2__" : 1 (cached) 00:02:33.315 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:33.315 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:33.315 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:33.315 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:33.315 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:33.315 Program doxygen found: YES (/usr/bin/doxygen) 00:02:33.315 Configuring doxy-api.conf using configuration 00:02:33.315 Program sphinx-build found: NO 00:02:33.315 Configuring rte_build_config.h using configuration 00:02:33.315 Message: 00:02:33.315 ================= 00:02:33.315 Applications Enabled 00:02:33.315 ================= 00:02:33.315 00:02:33.315 apps: 00:02:33.315 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:33.315 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:33.315 test-security-perf, 00:02:33.315 00:02:33.315 Message: 00:02:33.315 ================= 00:02:33.315 Libraries Enabled 00:02:33.315 ================= 00:02:33.315 00:02:33.315 libs: 00:02:33.315 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:33.315 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:33.315 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:33.315 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm, 00:02:33.315 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:33.315 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:33.315 table, pipeline, graph, node, 00:02:33.315 00:02:33.315 Message: 00:02:33.315 =============== 00:02:33.315 Drivers Enabled 00:02:33.315 =============== 00:02:33.315 00:02:33.315 common: 00:02:33.315 00:02:33.315 bus: 00:02:33.315 pci, vdev, 00:02:33.315 mempool: 00:02:33.315 ring, 00:02:33.315 dma: 00:02:33.315 00:02:33.315 net: 00:02:33.315 i40e, 00:02:33.315 raw: 00:02:33.315 00:02:33.315 crypto: 00:02:33.315 00:02:33.315 compress: 00:02:33.315 00:02:33.315 regex: 00:02:33.315 00:02:33.315 vdpa: 00:02:33.315 00:02:33.315 event: 00:02:33.315 00:02:33.315 baseband: 00:02:33.315 00:02:33.315 gpu: 00:02:33.315 00:02:33.315 00:02:33.315 Message: 00:02:33.315 ================= 00:02:33.315 Content Skipped 00:02:33.315 ================= 00:02:33.315 00:02:33.315 apps: 00:02:33.315 00:02:33.315 libs: 00:02:33.315 kni: explicitly disabled via build config (deprecated lib) 00:02:33.315 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:33.315 00:02:33.315 drivers: 00:02:33.315 common/cpt: not in enabled drivers build config 00:02:33.315 common/dpaax: not in enabled drivers build config 00:02:33.315 common/iavf: not in enabled drivers build config 00:02:33.315 common/idpf: not in enabled drivers build config 00:02:33.315 common/mvep: not in enabled drivers build config 00:02:33.315 common/octeontx: not in enabled drivers build config 00:02:33.315 bus/auxiliary: not in enabled drivers build config 00:02:33.315 bus/dpaa: not in enabled drivers build config 00:02:33.315 bus/fslmc: not in enabled drivers build config 00:02:33.315 bus/ifpga: not in enabled drivers build config 00:02:33.315 bus/vmbus: not in enabled drivers build config 00:02:33.315 common/cnxk: not in enabled drivers build config 00:02:33.315 common/mlx5: not in enabled drivers build config 00:02:33.315 common/qat: not in enabled drivers build config 00:02:33.315 common/sfc_efx: not in enabled drivers build config 00:02:33.315 mempool/bucket: not in enabled drivers build config 00:02:33.316 mempool/cnxk: not in enabled drivers build config 00:02:33.316 mempool/dpaa: not in enabled drivers build config 00:02:33.316 mempool/dpaa2: not in enabled drivers build config 00:02:33.316 mempool/octeontx: not in enabled drivers build config 00:02:33.316 mempool/stack: not in enabled drivers build config 00:02:33.316 dma/cnxk: not in enabled drivers build config 00:02:33.316 dma/dpaa: not in enabled drivers build config 00:02:33.316 dma/dpaa2: not in enabled drivers build config 00:02:33.316 dma/hisilicon: not in enabled drivers build config 00:02:33.316 dma/idxd: not in enabled drivers build config 00:02:33.316 dma/ioat: not in enabled drivers build config 00:02:33.316 dma/skeleton: not in enabled drivers build config 00:02:33.316 net/af_packet: not in enabled drivers build config 00:02:33.316 net/af_xdp: not in enabled drivers build config 00:02:33.316 net/ark: not in enabled drivers build config 00:02:33.316 net/atlantic: not in enabled drivers build config 00:02:33.316 net/avp: not in enabled drivers build config 00:02:33.316 net/axgbe: not in enabled drivers build config 00:02:33.316 net/bnx2x: not in enabled drivers build config 00:02:33.343 net/bnxt: not in enabled drivers build config 00:02:33.343 net/bonding: not in enabled drivers build config 00:02:33.343 net/cnxk: not in enabled drivers build config 00:02:33.343 net/cxgbe: not in 
enabled drivers build config 00:02:33.343 net/dpaa: not in enabled drivers build config 00:02:33.343 net/dpaa2: not in enabled drivers build config 00:02:33.343 net/e1000: not in enabled drivers build config 00:02:33.343 net/ena: not in enabled drivers build config 00:02:33.343 net/enetc: not in enabled drivers build config 00:02:33.343 net/enetfec: not in enabled drivers build config 00:02:33.343 net/enic: not in enabled drivers build config 00:02:33.343 net/failsafe: not in enabled drivers build config 00:02:33.343 net/fm10k: not in enabled drivers build config 00:02:33.343 net/gve: not in enabled drivers build config 00:02:33.343 net/hinic: not in enabled drivers build config 00:02:33.343 net/hns3: not in enabled drivers build config 00:02:33.343 net/iavf: not in enabled drivers build config 00:02:33.343 net/ice: not in enabled drivers build config 00:02:33.343 net/idpf: not in enabled drivers build config 00:02:33.343 net/igc: not in enabled drivers build config 00:02:33.343 net/ionic: not in enabled drivers build config 00:02:33.343 net/ipn3ke: not in enabled drivers build config 00:02:33.343 net/ixgbe: not in enabled drivers build config 00:02:33.343 net/kni: not in enabled drivers build config 00:02:33.343 net/liquidio: not in enabled drivers build config 00:02:33.343 net/mana: not in enabled drivers build config 00:02:33.343 net/memif: not in enabled drivers build config 00:02:33.343 net/mlx4: not in enabled drivers build config 00:02:33.343 net/mlx5: not in enabled drivers build config 00:02:33.343 net/mvneta: not in enabled drivers build config 00:02:33.343 net/mvpp2: not in enabled drivers build config 00:02:33.343 net/netvsc: not in enabled drivers build config 00:02:33.343 net/nfb: not in enabled drivers build config 00:02:33.343 net/nfp: not in enabled drivers build config 00:02:33.343 net/ngbe: not in enabled drivers build config 00:02:33.343 net/null: not in enabled drivers build config 00:02:33.343 net/octeontx: not in enabled drivers build config 00:02:33.343 net/octeon_ep: not in enabled drivers build config 00:02:33.343 net/pcap: not in enabled drivers build config 00:02:33.343 net/pfe: not in enabled drivers build config 00:02:33.343 net/qede: not in enabled drivers build config 00:02:33.343 net/ring: not in enabled drivers build config 00:02:33.343 net/sfc: not in enabled drivers build config 00:02:33.343 net/softnic: not in enabled drivers build config 00:02:33.343 net/tap: not in enabled drivers build config 00:02:33.343 net/thunderx: not in enabled drivers build config 00:02:33.343 net/txgbe: not in enabled drivers build config 00:02:33.343 net/vdev_netvsc: not in enabled drivers build config 00:02:33.344 net/vhost: not in enabled drivers build config 00:02:33.344 net/virtio: not in enabled drivers build config 00:02:33.344 net/vmxnet3: not in enabled drivers build config 00:02:33.344 raw/cnxk_bphy: not in enabled drivers build config 00:02:33.344 raw/cnxk_gpio: not in enabled drivers build config 00:02:33.344 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:33.344 raw/ifpga: not in enabled drivers build config 00:02:33.344 raw/ntb: not in enabled drivers build config 00:02:33.344 raw/skeleton: not in enabled drivers build config 00:02:33.344 crypto/armv8: not in enabled drivers build config 00:02:33.344 crypto/bcmfs: not in enabled drivers build config 00:02:33.344 crypto/caam_jr: not in enabled drivers build config 00:02:33.344 crypto/ccp: not in enabled drivers build config 00:02:33.344 crypto/cnxk: not in enabled drivers build config 00:02:33.344 
crypto/dpaa_sec: not in enabled drivers build config 00:02:33.344 crypto/dpaa2_sec: not in enabled drivers build config 00:02:33.344 crypto/ipsec_mb: not in enabled drivers build config 00:02:33.344 crypto/mlx5: not in enabled drivers build config 00:02:33.344 crypto/mvsam: not in enabled drivers build config 00:02:33.344 crypto/nitrox: not in enabled drivers build config 00:02:33.344 crypto/null: not in enabled drivers build config 00:02:33.344 crypto/octeontx: not in enabled drivers build config 00:02:33.344 crypto/openssl: not in enabled drivers build config 00:02:33.344 crypto/scheduler: not in enabled drivers build config 00:02:33.344 crypto/uadk: not in enabled drivers build config 00:02:33.344 crypto/virtio: not in enabled drivers build config 00:02:33.344 compress/isal: not in enabled drivers build config 00:02:33.344 compress/mlx5: not in enabled drivers build config 00:02:33.344 compress/octeontx: not in enabled drivers build config 00:02:33.344 compress/zlib: not in enabled drivers build config 00:02:33.344 regex/mlx5: not in enabled drivers build config 00:02:33.344 regex/cn9k: not in enabled drivers build config 00:02:33.344 vdpa/ifc: not in enabled drivers build config 00:02:33.344 vdpa/mlx5: not in enabled drivers build config 00:02:33.344 vdpa/sfc: not in enabled drivers build config 00:02:33.344 event/cnxk: not in enabled drivers build config 00:02:33.344 event/dlb2: not in enabled drivers build config 00:02:33.344 event/dpaa: not in enabled drivers build config 00:02:33.344 event/dpaa2: not in enabled drivers build config 00:02:33.344 event/dsw: not in enabled drivers build config 00:02:33.344 event/opdl: not in enabled drivers build config 00:02:33.344 event/skeleton: not in enabled drivers build config 00:02:33.344 event/sw: not in enabled drivers build config 00:02:33.344 event/octeontx: not in enabled drivers build config 00:02:33.344 baseband/acc: not in enabled drivers build config 00:02:33.344 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:33.344 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:33.344 baseband/la12xx: not in enabled drivers build config 00:02:33.344 baseband/null: not in enabled drivers build config 00:02:33.344 baseband/turbo_sw: not in enabled drivers build config 00:02:33.344 gpu/cuda: not in enabled drivers build config 00:02:33.344 00:02:33.344 00:02:33.344 Build targets in project: 314 00:02:33.344 00:02:33.344 DPDK 22.11.4 00:02:33.344 00:02:33.344 User defined options 00:02:33.344 libdir : lib 00:02:33.344 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:33.344 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:33.344 c_link_args : 00:02:33.344 enable_docs : false 00:02:33.344 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:33.344 enable_kmods : false 00:02:33.344 machine : native 00:02:33.344 tests : false 00:02:33.344 00:02:33.344 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:33.344 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:02:33.344 13:01:29 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:33.344 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:33.344 [1/743] Generating lib/rte_kvargs_mingw with a custom command 00:02:33.344 [2/743] Generating lib/rte_telemetry_mingw with a custom command 00:02:33.344 [3/743] Generating lib/rte_telemetry_def with a custom command 00:02:33.344 [4/743] Generating lib/rte_kvargs_def with a custom command 00:02:33.344 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:33.344 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:33.603 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:33.603 [8/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:33.603 [9/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:33.603 [10/743] Linking static target lib/librte_kvargs.a 00:02:33.603 [11/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:33.603 [12/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:33.603 [13/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:33.603 [14/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:33.603 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:33.603 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:33.603 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:33.603 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:33.603 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:33.909 [20/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.909 [21/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:33.909 [22/743] Linking target lib/librte_kvargs.so.23.0 00:02:33.909 [23/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:33.909 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:33.909 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:33.909 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:33.909 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:33.909 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:33.909 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:33.909 [30/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:33.909 [31/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:33.909 [32/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:33.909 [33/743] Linking static target lib/librte_telemetry.a 00:02:34.166 [34/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:34.166 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:34.166 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:34.166 [37/743] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:34.167 [38/743] Compiling C 
object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:34.167 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:34.167 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:34.167 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:34.424 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:34.424 [43/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.424 [44/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:34.424 [45/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:34.424 [46/743] Linking target lib/librte_telemetry.so.23.0 00:02:34.424 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:34.424 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:34.713 [49/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:34.713 [50/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:34.713 [51/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:34.713 [52/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:34.714 [53/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:34.714 [54/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:34.714 [55/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:34.714 [56/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:34.714 [57/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:34.714 [58/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:34.714 [59/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:34.714 [60/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:34.714 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:34.714 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:34.714 [63/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:34.714 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:34.714 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:34.714 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:34.972 [67/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:34.972 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:34.972 [69/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:34.972 [70/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:34.972 [71/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:34.972 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:34.972 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:34.972 [74/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:34.972 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:34.972 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:34.972 [77/743] Generating lib/rte_eal_mingw with a custom command 00:02:34.972 [78/743] Generating lib/rte_eal_def with a custom 
command 00:02:34.972 [79/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:34.972 [80/743] Generating lib/rte_ring_mingw with a custom command 00:02:34.972 [81/743] Generating lib/rte_ring_def with a custom command 00:02:34.972 [82/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:35.230 [83/743] Generating lib/rte_rcu_def with a custom command 00:02:35.230 [84/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:35.230 [85/743] Generating lib/rte_rcu_mingw with a custom command 00:02:35.230 [86/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:35.230 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:35.230 [88/743] Linking static target lib/librte_ring.a 00:02:35.230 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:35.230 [90/743] Generating lib/rte_mempool_def with a custom command 00:02:35.230 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:02:35.487 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:35.487 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:35.487 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.487 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:35.744 [96/743] Linking static target lib/librte_eal.a 00:02:35.744 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:35.744 [98/743] Generating lib/rte_mbuf_def with a custom command 00:02:35.744 [99/743] Generating lib/rte_mbuf_mingw with a custom command 00:02:35.744 [100/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:36.002 [101/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:36.002 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:36.002 [103/743] Linking static target lib/librte_rcu.a 00:02:36.002 [104/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:36.002 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:36.260 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:36.260 [107/743] Linking static target lib/librte_mempool.a 00:02:36.260 [108/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.260 [109/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:36.260 [110/743] Generating lib/rte_net_def with a custom command 00:02:36.260 [111/743] Generating lib/rte_net_mingw with a custom command 00:02:36.518 [112/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:36.518 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:36.518 [114/743] Generating lib/rte_meter_def with a custom command 00:02:36.518 [115/743] Generating lib/rte_meter_mingw with a custom command 00:02:36.518 [116/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:36.518 [117/743] Linking static target lib/librte_meter.a 00:02:36.518 [118/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:36.775 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:36.775 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:36.775 [121/743] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.775 [122/743] Compiling C 
object lib/librte_net.a.p/net_rte_net.c.o 00:02:37.032 [123/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:37.032 [124/743] Linking static target lib/librte_net.a 00:02:37.033 [125/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:37.033 [126/743] Linking static target lib/librte_mbuf.a 00:02:37.033 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.290 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.290 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:37.290 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:37.290 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:37.290 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:37.548 [133/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:37.548 [134/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.807 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:38.080 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:38.080 [137/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:38.080 [138/743] Generating lib/rte_ethdev_def with a custom command 00:02:38.080 [139/743] Generating lib/rte_ethdev_mingw with a custom command 00:02:38.080 [140/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:38.080 [141/743] Generating lib/rte_pci_def with a custom command 00:02:38.341 [142/743] Generating lib/rte_pci_mingw with a custom command 00:02:38.341 [143/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:38.341 [144/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:38.341 [145/743] Linking static target lib/librte_pci.a 00:02:38.341 [146/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:38.341 [147/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:38.341 [148/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:38.341 [149/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:38.599 [150/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:38.599 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:38.599 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:38.599 [153/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.599 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:38.599 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:38.599 [156/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:38.599 [157/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:38.599 [158/743] Generating lib/rte_cmdline_def with a custom command 00:02:38.599 [159/743] Generating lib/rte_cmdline_mingw with a custom command 00:02:38.599 [160/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:38.599 [161/743] Generating lib/rte_metrics_def with a custom command 00:02:38.599 [162/743] Generating lib/rte_metrics_mingw with a custom command 00:02:38.857 [163/743] Compiling C object 
lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:38.857 [164/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:38.857 [165/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:38.857 [166/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:38.857 [167/743] Generating lib/rte_hash_def with a custom command 00:02:38.857 [168/743] Generating lib/rte_hash_mingw with a custom command 00:02:38.857 [169/743] Generating lib/rte_timer_def with a custom command 00:02:38.857 [170/743] Generating lib/rte_timer_mingw with a custom command 00:02:39.115 [171/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:39.115 [172/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:39.115 [173/743] Linking static target lib/librte_cmdline.a 00:02:39.373 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:39.373 [175/743] Linking static target lib/librte_metrics.a 00:02:39.373 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:39.373 [177/743] Linking static target lib/librte_timer.a 00:02:39.939 [178/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.939 [179/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.939 [180/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:39.939 [181/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:39.939 [182/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:39.939 [183/743] Linking static target lib/librte_ethdev.a 00:02:39.939 [184/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.507 [185/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:40.507 [186/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:40.507 [187/743] Generating lib/rte_acl_def with a custom command 00:02:40.764 [188/743] Generating lib/rte_acl_mingw with a custom command 00:02:40.764 [189/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:40.764 [190/743] Generating lib/rte_bbdev_def with a custom command 00:02:40.764 [191/743] Generating lib/rte_bbdev_mingw with a custom command 00:02:40.764 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:02:40.764 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:02:41.022 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:41.280 [195/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:41.537 [196/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:41.537 [197/743] Linking static target lib/librte_bitratestats.a 00:02:41.537 [198/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:41.537 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:41.537 [200/743] Linking static target lib/librte_bbdev.a 00:02:41.794 [201/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.051 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:42.051 [203/743] Linking static target lib/librte_hash.a 00:02:42.308 [204/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:42.308 [205/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:42.308 [206/743] Compiling C object 
lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:42.308 [207/743] Linking static target lib/acl/libavx512_tmp.a 00:02:42.308 [208/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:42.308 [209/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.874 [210/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:42.874 [211/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.874 [212/743] Generating lib/rte_bpf_def with a custom command 00:02:42.874 [213/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:42.874 [214/743] Generating lib/rte_bpf_mingw with a custom command 00:02:42.874 [215/743] Generating lib/rte_cfgfile_def with a custom command 00:02:42.874 [216/743] Generating lib/rte_cfgfile_mingw with a custom command 00:02:42.874 [217/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:42.874 [218/743] Linking static target lib/librte_acl.a 00:02:42.874 [219/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:43.131 [220/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:43.131 [221/743] Generating lib/rte_compressdev_def with a custom command 00:02:43.131 [222/743] Generating lib/rte_compressdev_mingw with a custom command 00:02:43.131 [223/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:43.131 [224/743] Linking static target lib/librte_cfgfile.a 00:02:43.131 [225/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.389 [226/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:43.389 [227/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:43.389 [228/743] Generating lib/rte_cryptodev_def with a custom command 00:02:43.646 [229/743] Generating lib/rte_cryptodev_mingw with a custom command 00:02:43.646 [230/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.646 [231/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:43.646 [232/743] Linking static target lib/librte_bpf.a 00:02:43.646 [233/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:43.646 [234/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.903 [235/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:43.903 [236/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:43.903 [237/743] Linking static target lib/librte_compressdev.a 00:02:43.903 [238/743] Generating lib/rte_distributor_def with a custom command 00:02:43.903 [239/743] Linking target lib/librte_eal.so.23.0 00:02:43.903 [240/743] Generating lib/rte_distributor_mingw with a custom command 00:02:43.903 [241/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.163 [242/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:44.163 [243/743] Generating lib/rte_efd_def with a custom command 00:02:44.163 [244/743] Linking target lib/librte_ring.so.23.0 00:02:44.163 [245/743] Linking target lib/librte_meter.so.23.0 00:02:44.163 [246/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:44.163 [247/743] Linking target lib/librte_pci.so.23.0 00:02:44.163 [248/743] Generating symbol file 
lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:44.163 [249/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:44.163 [250/743] Linking target lib/librte_rcu.so.23.0 00:02:44.421 [251/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:44.421 [252/743] Linking target lib/librte_mempool.so.23.0 00:02:44.421 [253/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:44.421 [254/743] Linking target lib/librte_timer.so.23.0 00:02:44.421 [255/743] Linking target lib/librte_acl.so.23.0 00:02:44.421 [256/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:44.421 [257/743] Linking target lib/librte_cfgfile.so.23.0 00:02:44.421 [258/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:44.421 [259/743] Generating lib/rte_efd_mingw with a custom command 00:02:44.421 [260/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:44.679 [261/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:44.679 [262/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:44.679 [263/743] Linking target lib/librte_mbuf.so.23.0 00:02:44.679 [264/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:44.679 [265/743] Linking static target lib/librte_distributor.a 00:02:44.679 [266/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:44.679 [267/743] Linking target lib/librte_net.so.23.0 00:02:44.937 [268/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.937 [269/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.937 [270/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:44.937 [271/743] Linking target lib/librte_bbdev.so.23.0 00:02:44.937 [272/743] Linking target lib/librte_cmdline.so.23.0 00:02:44.937 [273/743] Linking target lib/librte_compressdev.so.23.0 00:02:44.937 [274/743] Linking target lib/librte_distributor.so.23.0 00:02:44.937 [275/743] Linking target lib/librte_hash.so.23.0 00:02:45.195 [276/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:45.195 [277/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:45.451 [278/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.451 [279/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:45.451 [280/743] Generating lib/rte_eventdev_def with a custom command 00:02:45.451 [281/743] Generating lib/rte_eventdev_mingw with a custom command 00:02:45.451 [282/743] Linking target lib/librte_ethdev.so.23.0 00:02:45.708 [283/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:45.708 [284/743] Linking target lib/librte_metrics.so.23.0 00:02:45.708 [285/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:45.708 [286/743] Linking target lib/librte_bpf.so.23.0 00:02:45.708 [287/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:45.708 [288/743] Linking static target lib/librte_efd.a 00:02:45.964 [289/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:45.964 [290/743] Generating 
symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:45.964 [291/743] Generating lib/rte_gpudev_def with a custom command 00:02:45.964 [292/743] Linking target lib/librte_bitratestats.so.23.0 00:02:45.964 [293/743] Generating lib/rte_gpudev_mingw with a custom command 00:02:45.964 [294/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:45.964 [295/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.221 [296/743] Linking target lib/librte_efd.so.23.0 00:02:46.221 [297/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:46.221 [298/743] Linking static target lib/librte_cryptodev.a 00:02:46.478 [299/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:46.478 [300/743] Linking static target lib/librte_gpudev.a 00:02:46.478 [301/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:46.750 [302/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:46.750 [303/743] Generating lib/rte_gro_def with a custom command 00:02:47.007 [304/743] Generating lib/rte_gro_mingw with a custom command 00:02:47.007 [305/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:47.007 [306/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:47.007 [307/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:47.264 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:47.521 [309/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:47.521 [310/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.521 [311/743] Linking target lib/librte_gpudev.so.23.0 00:02:47.521 [312/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:47.521 [313/743] Generating lib/rte_gso_def with a custom command 00:02:47.521 [314/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:47.778 [315/743] Generating lib/rte_gso_mingw with a custom command 00:02:47.778 [316/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:47.778 [317/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:47.778 [318/743] Linking static target lib/librte_eventdev.a 00:02:47.778 [319/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:48.035 [320/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:48.035 [321/743] Linking static target lib/librte_gro.a 00:02:48.295 [322/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.295 [323/743] Linking target lib/librte_gro.so.23.0 00:02:48.295 [324/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:48.295 [325/743] Generating lib/rte_ip_frag_def with a custom command 00:02:48.552 [326/743] Generating lib/rte_ip_frag_mingw with a custom command 00:02:48.552 [327/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:48.552 [328/743] Generating lib/rte_jobstats_def with a custom command 00:02:48.552 [329/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:48.552 [330/743] Generating lib/rte_jobstats_mingw with a custom command 00:02:48.552 [331/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:48.552 [332/743] Linking static target lib/librte_jobstats.a 00:02:48.552 [333/743] Generating 
lib/rte_latencystats_def with a custom command 00:02:48.552 [334/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:48.552 [335/743] Linking static target lib/librte_gso.a 00:02:48.552 [336/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:48.552 [337/743] Generating lib/rte_latencystats_mingw with a custom command 00:02:48.809 [338/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:48.809 [339/743] Generating lib/rte_lpm_def with a custom command 00:02:48.809 [340/743] Generating lib/rte_lpm_mingw with a custom command 00:02:48.809 [341/743] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.809 [342/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.809 [343/743] Linking target lib/librte_gso.so.23.0 00:02:49.074 [344/743] Linking target lib/librte_cryptodev.so.23.0 00:02:49.074 [345/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:49.074 [346/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.074 [347/743] Linking target lib/librte_jobstats.so.23.0 00:02:49.074 [348/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:49.354 [349/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:49.354 [350/743] Linking static target lib/librte_ip_frag.a 00:02:49.354 [351/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:49.354 [352/743] Linking static target lib/librte_latencystats.a 00:02:49.611 [353/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.611 [354/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:49.611 [355/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:49.611 [356/743] Linking target lib/librte_latencystats.so.23.0 00:02:49.611 [357/743] Generating lib/rte_member_mingw with a custom command 00:02:49.611 [358/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.611 [359/743] Generating lib/rte_member_def with a custom command 00:02:49.611 [360/743] Linking target lib/librte_ip_frag.so.23.0 00:02:49.611 [361/743] Generating lib/rte_pcapng_def with a custom command 00:02:49.867 [362/743] Generating lib/rte_pcapng_mingw with a custom command 00:02:49.867 [363/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:49.867 [364/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:50.124 [365/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:50.124 [366/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:50.124 [367/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:50.124 [368/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:50.124 [369/743] Linking static target lib/librte_lpm.a 00:02:50.381 [370/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:50.381 [371/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.381 [372/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:50.381 [373/743] Linking target lib/librte_eventdev.so.23.0 00:02:50.381 [374/743] Compiling C object 
lib/librte_power.a.p/power_rte_power.c.o 00:02:50.638 [375/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:50.638 [376/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:50.638 [377/743] Generating lib/rte_power_def with a custom command 00:02:50.638 [378/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:50.638 [379/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.638 [380/743] Linking static target lib/librte_pcapng.a 00:02:50.638 [381/743] Generating lib/rte_power_mingw with a custom command 00:02:50.924 [382/743] Linking target lib/librte_lpm.so.23.0 00:02:50.924 [383/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:50.924 [384/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:50.924 [385/743] Generating lib/rte_rawdev_def with a custom command 00:02:50.924 [386/743] Generating lib/rte_rawdev_mingw with a custom command 00:02:50.924 [387/743] Generating lib/rte_regexdev_def with a custom command 00:02:50.924 [388/743] Generating lib/rte_regexdev_mingw with a custom command 00:02:50.924 [389/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:51.183 [390/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:51.183 [391/743] Generating lib/rte_dmadev_def with a custom command 00:02:51.183 [392/743] Generating lib/rte_dmadev_mingw with a custom command 00:02:51.183 [393/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.183 [394/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:51.183 [395/743] Linking static target lib/librte_rawdev.a 00:02:51.183 [396/743] Linking target lib/librte_pcapng.so.23.0 00:02:51.441 [397/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:51.441 [398/743] Generating lib/rte_rib_def with a custom command 00:02:51.441 [399/743] Generating lib/rte_rib_mingw with a custom command 00:02:51.441 [400/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:51.441 [401/743] Generating lib/rte_reorder_def with a custom command 00:02:51.700 [402/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:51.700 [403/743] Generating lib/rte_reorder_mingw with a custom command 00:02:51.700 [404/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:51.700 [405/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:51.700 [406/743] Linking static target lib/librte_power.a 00:02:51.700 [407/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:51.700 [408/743] Linking static target lib/librte_dmadev.a 00:02:51.701 [409/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.701 [410/743] Linking target lib/librte_rawdev.so.23.0 00:02:51.958 [411/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:51.958 [412/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:51.958 [413/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:51.958 [414/743] Generating lib/rte_sched_def with a custom command 00:02:51.958 [415/743] Linking static target lib/librte_regexdev.a 00:02:51.958 [416/743] Generating lib/rte_sched_mingw with a custom command 00:02:51.958 [417/743] Generating 
lib/rte_security_def with a custom command 00:02:51.958 [418/743] Generating lib/rte_security_mingw with a custom command 00:02:52.215 [419/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:52.215 [420/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:52.215 [421/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:52.215 [422/743] Generating lib/rte_stack_def with a custom command 00:02:52.215 [423/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:52.215 [424/743] Generating lib/rte_stack_mingw with a custom command 00:02:52.215 [425/743] Linking static target lib/librte_stack.a 00:02:52.473 [426/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.473 [427/743] Linking target lib/librte_dmadev.so.23.0 00:02:52.473 [428/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:52.730 [429/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.730 [430/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:52.730 [431/743] Linking static target lib/librte_reorder.a 00:02:52.730 [432/743] Linking target lib/librte_stack.so.23.0 00:02:52.730 [433/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:52.730 [434/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:52.730 [435/743] Linking static target lib/librte_security.a 00:02:52.988 [436/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.988 [437/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.988 [438/743] Linking target lib/librte_regexdev.so.23.0 00:02:52.988 [439/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.988 [440/743] Linking target lib/librte_power.so.23.0 00:02:52.988 [441/743] Linking target lib/librte_reorder.so.23.0 00:02:53.247 [442/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:53.247 [443/743] Linking static target lib/librte_member.a 00:02:53.247 [444/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:53.247 [445/743] Linking static target lib/librte_rib.a 00:02:53.505 [446/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.505 [447/743] Linking target lib/librte_security.so.23.0 00:02:53.505 [448/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.763 [449/743] Linking target lib/librte_member.so.23.0 00:02:53.763 [450/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:53.763 [451/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.763 [452/743] Generating lib/rte_vhost_def with a custom command 00:02:53.763 [453/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:53.763 [454/743] Generating lib/rte_vhost_mingw with a custom command 00:02:53.763 [455/743] Linking target lib/librte_rib.so.23.0 00:02:53.763 [456/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:54.021 [457/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:54.021 [458/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:54.586 [459/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 
00:02:54.586 [460/743] Linking static target lib/librte_sched.a 00:02:54.843 [461/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:54.843 [462/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:54.843 [463/743] Generating lib/rte_ipsec_def with a custom command 00:02:54.843 [464/743] Generating lib/rte_ipsec_mingw with a custom command 00:02:55.101 [465/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:55.101 [466/743] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.358 [467/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:55.358 [468/743] Linking target lib/librte_sched.so.23.0 00:02:55.358 [469/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:55.358 [470/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:55.358 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:55.615 [472/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:55.615 [473/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:55.615 [474/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:55.615 [475/743] Generating lib/rte_fib_def with a custom command 00:02:55.615 [476/743] Generating lib/rte_fib_mingw with a custom command 00:02:55.873 [477/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:55.873 [478/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:55.873 [479/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:56.131 [480/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:56.705 [481/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:56.705 [482/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:56.965 [483/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:56.965 [484/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:56.965 [485/743] Linking static target lib/librte_fib.a 00:02:56.965 [486/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:56.965 [487/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:56.965 [488/743] Linking static target lib/librte_ipsec.a 00:02:57.222 [489/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.480 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:57.480 [491/743] Linking target lib/librte_fib.so.23.0 00:02:57.480 [492/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:57.737 [493/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.737 [494/743] Linking target lib/librte_ipsec.so.23.0 00:02:57.994 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:57.994 [496/743] Generating lib/rte_port_def with a custom command 00:02:57.994 [497/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:57.994 [498/743] Generating lib/rte_port_mingw with a custom command 00:02:57.994 [499/743] Generating lib/rte_pdump_def with a custom command 00:02:58.253 [500/743] Generating lib/rte_pdump_mingw with a custom command 00:02:58.253 [501/743] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:58.510 [502/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:58.510 [503/743] Compiling C object 
lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:58.768 [504/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:58.768 [505/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:58.768 [506/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:58.768 [507/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:59.026 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:59.026 [509/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:59.026 [510/743] Linking static target lib/librte_port.a 00:02:59.284 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:59.284 [512/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:59.284 [513/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:59.541 [514/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:59.541 [515/743] Linking static target lib/librte_pdump.a 00:02:59.541 [516/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:59.542 [517/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:59.799 [518/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.799 [519/743] Linking target lib/librte_port.so.23.0 00:02:59.799 [520/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.799 [521/743] Linking target lib/librte_pdump.so.23.0 00:03:00.057 [522/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:03:00.314 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:00.314 [524/743] Generating lib/rte_table_def with a custom command 00:03:00.314 [525/743] Generating lib/rte_table_mingw with a custom command 00:03:00.571 [526/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:00.571 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:00.571 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:00.829 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:00.829 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:00.829 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:00.829 [532/743] Generating lib/rte_pipeline_def with a custom command 00:03:00.829 [533/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:01.087 [534/743] Linking static target lib/librte_table.a 00:03:01.087 [535/743] Generating lib/rte_pipeline_mingw with a custom command 00:03:01.355 [536/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:01.633 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:01.633 [538/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:01.892 [539/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.150 [540/743] Linking target lib/librte_table.so.23.0 00:03:02.150 [541/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:02.150 [542/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:03:02.150 [543/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:02.408 [544/743] 
Generating lib/rte_graph_def with a custom command 00:03:02.408 [545/743] Generating lib/rte_graph_mingw with a custom command 00:03:02.408 [546/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:02.408 [547/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:02.666 [548/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:02.923 [549/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:03.181 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:03.181 [551/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:03.438 [552/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:03.438 [553/743] Linking static target lib/librte_graph.a 00:03:03.438 [554/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:03.696 [555/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:03.696 [556/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:03.696 [557/743] Generating lib/rte_node_def with a custom command 00:03:03.953 [558/743] Generating lib/rte_node_mingw with a custom command 00:03:03.953 [559/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:03.953 [560/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:04.211 [561/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:04.211 [562/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:04.211 [563/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:04.469 [564/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:04.469 [565/743] Generating drivers/rte_bus_pci_def with a custom command 00:03:04.469 [566/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:04.469 [567/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:04.469 [568/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:04.469 [569/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:04.469 [570/743] Generating drivers/rte_bus_vdev_def with a custom command 00:03:04.469 [571/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.469 [572/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:04.726 [573/743] Linking target lib/librte_graph.so.23.0 00:03:04.726 [574/743] Generating drivers/rte_mempool_ring_def with a custom command 00:03:04.726 [575/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:03:04.726 [576/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:04.726 [577/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:04.726 [578/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:04.984 [579/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:04.984 [580/743] Linking static target lib/librte_node.a 00:03:04.984 [581/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:04.984 [582/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:04.984 [583/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:04.984 [584/743] Linking static target drivers/librte_bus_vdev.a 00:03:04.984 [585/743] Compiling C object 
drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:04.984 [586/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:04.984 [587/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:05.241 [588/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.241 [589/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.241 [590/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:05.241 [591/743] Linking target drivers/librte_bus_vdev.so.23.0 00:03:05.241 [592/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:05.498 [593/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:05.498 [594/743] Linking target lib/librte_node.so.23.0 00:03:05.498 [595/743] Linking static target drivers/librte_bus_pci.a 00:03:05.498 [596/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:05.912 [597/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.912 [598/743] Linking target drivers/librte_bus_pci.so.23.0 00:03:06.173 [599/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:06.173 [600/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:06.173 [601/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:06.173 [602/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:06.173 [603/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:06.173 [604/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:06.173 [605/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:06.173 [606/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:06.173 [607/743] Linking static target drivers/librte_mempool_ring.a 00:03:06.173 [608/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:06.430 [609/743] Linking target drivers/librte_mempool_ring.so.23.0 00:03:06.430 [610/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:06.993 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:07.249 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:07.814 [613/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:08.070 [614/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:08.070 [615/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:08.327 [616/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:08.327 [617/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:08.585 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:08.585 [619/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:08.585 [620/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:09.517 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:09.517 [622/743] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:09.517 [623/743] Generating drivers/rte_net_i40e_def with a custom command 00:03:09.517 [624/743] Generating drivers/rte_net_i40e_mingw with a custom command 00:03:09.775 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:10.720 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:10.720 [627/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:10.720 [628/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:10.720 [629/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:10.991 [630/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:10.991 [631/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:10.991 [632/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:11.248 [633/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:11.248 [634/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:11.249 [635/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:11.506 [636/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:11.506 [637/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:11.506 [638/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:11.506 [639/743] Linking static target lib/librte_vhost.a 00:03:11.763 [640/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:11.763 [641/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:11.763 [642/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:11.763 [643/743] Linking static target drivers/librte_net_i40e.a 00:03:12.020 [644/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:12.020 [645/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:12.278 [646/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:12.278 [647/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:12.535 [648/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.535 [649/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:12.535 [650/743] Linking target drivers/librte_net_i40e.so.23.0 00:03:12.792 [651/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:12.792 [652/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.792 [653/743] Linking target lib/librte_vhost.so.23.0 00:03:13.049 [654/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:13.050 [655/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:13.307 [656/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:13.307 [657/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:13.307 [658/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:13.563 [659/743] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:13.820 [660/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:13.820 [661/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:13.820 [662/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:13.820 [663/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:14.077 [664/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:14.077 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:14.077 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:14.334 [667/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:14.334 [668/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:14.590 [669/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:14.590 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:14.846 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:14.846 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:15.103 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:15.360 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:15.924 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:15.924 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:15.924 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:15.924 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:16.181 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:16.438 [680/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:16.438 [681/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:16.438 [682/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:16.696 [683/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:16.696 [684/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:16.954 [685/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:16.954 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:16.954 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:16.954 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:17.519 [689/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:17.519 [690/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:17.519 [691/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:17.519 [692/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:17.519 [693/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:17.519 [694/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 
00:03:18.084 [695/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:18.084 [696/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:18.340 [697/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:18.340 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:18.597 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:18.854 [700/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:19.112 [701/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:19.112 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:19.371 [703/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:19.371 [704/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:19.371 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:19.693 [706/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:19.962 [707/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:20.220 [708/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:20.220 [709/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:20.479 [710/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:20.479 [711/743] Linking static target lib/librte_pipeline.a 00:03:20.479 [712/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:20.737 [713/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:20.737 [714/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:20.737 [715/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:20.996 [716/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:20.996 [717/743] Linking target app/dpdk-dumpcap 00:03:20.996 [718/743] Linking target app/dpdk-pdump 00:03:21.255 [719/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:21.255 [720/743] Linking target app/dpdk-proc-info 00:03:21.255 [721/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:21.255 [722/743] Linking target app/dpdk-test-bbdev 00:03:21.255 [723/743] Linking target app/dpdk-test-acl 00:03:21.514 [724/743] Linking target app/dpdk-test-compress-perf 00:03:21.514 [725/743] Linking target app/dpdk-test-cmdline 00:03:21.514 [726/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:21.772 [727/743] Linking target app/dpdk-test-eventdev 00:03:21.772 [728/743] Linking target app/dpdk-test-flow-perf 00:03:21.772 [729/743] Linking target app/dpdk-test-crypto-perf 00:03:21.772 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:21.772 [731/743] Linking target app/dpdk-test-fib 00:03:22.030 [732/743] Linking target app/dpdk-test-gpudev 00:03:22.030 [733/743] Linking target app/dpdk-test-pipeline 00:03:22.288 [734/743] Linking target app/dpdk-testpmd 00:03:22.547 [735/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:22.547 [736/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:22.807 [737/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:23.065 [738/743] Linking target app/dpdk-test-sad 00:03:23.066 [739/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:23.323 [740/743] Linking target app/dpdk-test-regex 00:03:23.580 [741/743] Linking target 
app/dpdk-test-security-perf 00:03:23.838 [742/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.095 [743/743] Linking target lib/librte_pipeline.so.23.0 00:03:24.095 13:02:20 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:24.095 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:24.095 [0/1] Installing files. 00:03:24.664 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:24.664 Installing 
/home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 
00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:24.664 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.664 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.665 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.665 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.665 
Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.665 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.665 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 
00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 
00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:24.666 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:24.667 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:24.667 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:24.667 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:24.668 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:24.668 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:24.668 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:24.668 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.668 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.668 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.668 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.668 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.668 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.668 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.668 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.668 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.668 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.668 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.668 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.668 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.668 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.668 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_hash.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing 
lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.929 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.930 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.930 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.930 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.930 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.930 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.930 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.930 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.930 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:24.930 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:24.930 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:24.930 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.930 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:24.930 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.930 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:24.930 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.930 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.930 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.930 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.930 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.930 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.930 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.930 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.930 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.930 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.930 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.930 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.930 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.930 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.930 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.930 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.930 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.930 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.931 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing 
/home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing 
/home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.932 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.933 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.933 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.933 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.933 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.933 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.933 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:24.933 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:24.933 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:24.933 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:24.933 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:24.933 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:24.933 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:24.933 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:24.933 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:24.933 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:24.933 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:24.933 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:24.933 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:24.933 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:24.933 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:24.933 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:24.933 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:24.933 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:24.933 Installing symlink 
pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:24.933 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:24.933 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:24.933 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:24.933 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:24.933 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:24.933 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:24.933 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:24.933 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:24.933 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:24.933 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:24.933 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:24.933 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:24.933 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:24.933 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:24.933 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:24.933 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:24.933 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:24.933 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:24.933 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:24.933 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:24.933 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:24.933 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:24.933 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:24.933 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:24.933 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:24.933 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:24.933 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:24.933 Installing symlink pointing to 
librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:24.933 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:24.933 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:24.933 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:24.933 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:24.933 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:24.933 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:24.933 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:24.933 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:24.933 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:24.933 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:24.933 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:24.933 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:24.933 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:24.933 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:24.933 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:24.933 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:24.933 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:24.933 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:24.933 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:24.933 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:24.933 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:24.933 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:24.933 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:24.933 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:24.933 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:24.933 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:24.933 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:24.933 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:24.933 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:24.933 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:24.933 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:24.933 Installing symlink pointing to librte_member.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:24.933 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:24.933 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:24.933 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:24.933 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:24.933 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:24.933 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:24.933 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:24.934 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:24.934 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:24.934 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:24.934 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:24.934 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:24.934 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:24.934 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:24.934 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:24.934 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:24.934 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:24.934 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:24.934 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:24.934 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:24.934 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:24.934 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:24.934 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:24.934 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:24.934 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:24.934 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:24.934 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:24.934 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 
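Note: the symlink entries in this stretch record the standard ELF soname chain for each DPDK library: the fully versioned object (librte_*.so.23.0) gets a *.so.23 runtime link and a bare *.so development link, while the PMDs (bus_pci, bus_vdev, mempool_ring, net_i40e) are additionally exposed under dpdk/pmds-23.0 for plugin loading, as recorded a little further down. A minimal sketch of the equivalent chain for one library, purely illustrative since meson's install step is what actually creates these links:
  cd /home/vagrant/spdk_repo/dpdk/build/lib
  ln -sf librte_port.so.23.0 librte_port.so.23   # runtime link matching the DT_SONAME resolved by the dynamic loader
  ln -sf librte_port.so.23   librte_port.so      # development link the linker resolves through -lrte_port
  ls -l librte_port.so*                          # shows the same two-level chain the log records above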
00:03:24.934 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:24.934 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:24.934 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:24.934 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:24.934 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:24.934 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:24.934 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:24.934 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:24.934 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:24.934 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:24.934 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:24.934 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:24.934 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:24.934 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:24.934 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:24.934 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:24.934 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:24.934 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:24.934 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:24.934 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:25.191 13:02:21 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:03:25.191 13:02:21 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:25.191 13:02:21 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:03:25.191 13:02:21 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:25.191 00:03:25.191 real 0m58.587s 00:03:25.191 user 6m55.606s 00:03:25.191 sys 1m11.193s 00:03:25.191 13:02:21 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:25.191 13:02:21 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:25.191 ************************************ 00:03:25.191 END TEST build_native_dpdk 00:03:25.191 ************************************ 00:03:25.191 13:02:21 -- spdk/autobuild.sh@31 -- $ case 
"$SPDK_TEST_AUTOBUILD" in 00:03:25.191 13:02:21 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:25.191 13:02:21 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:25.191 13:02:21 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:25.191 13:02:21 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:25.191 13:02:21 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:25.191 13:02:21 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:25.191 13:02:21 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:25.191 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:25.453 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.453 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:25.453 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:25.716 Using 'verbs' RDMA provider 00:03:41.957 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:54.153 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:54.153 Creating mk/config.mk...done. 00:03:54.153 Creating mk/cc.flags.mk...done. 00:03:54.153 Type 'make' to build. 00:03:54.153 13:02:49 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:54.153 13:02:49 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:54.153 13:02:49 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:54.153 13:02:49 -- common/autotest_common.sh@10 -- $ set +x 00:03:54.153 ************************************ 00:03:54.153 START TEST make 00:03:54.153 ************************************ 00:03:54.153 13:02:49 make -- common/autotest_common.sh@1121 -- $ make -j10 00:03:54.153 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:54.153 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:54.153 meson setup builddir \ 00:03:54.153 -Dwith-libaio=enabled \ 00:03:54.153 -Dwith-liburing=enabled \ 00:03:54.153 -Dwith-libvfn=disabled \ 00:03:54.153 -Dwith-spdk=false && \ 00:03:54.153 meson compile -C builddir && \ 00:03:54.153 cd -) 00:03:54.153 make[1]: Nothing to be done for 'all'. 
00:03:56.053 The Meson build system 00:03:56.053 Version: 1.3.1 00:03:56.053 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:56.053 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:56.053 Build type: native build 00:03:56.053 Project name: xnvme 00:03:56.053 Project version: 0.7.3 00:03:56.053 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:56.053 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:56.053 Host machine cpu family: x86_64 00:03:56.053 Host machine cpu: x86_64 00:03:56.053 Message: host_machine.system: linux 00:03:56.053 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:56.053 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:56.053 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:56.053 Run-time dependency threads found: YES 00:03:56.053 Has header "setupapi.h" : NO 00:03:56.053 Has header "linux/blkzoned.h" : YES 00:03:56.053 Has header "linux/blkzoned.h" : YES (cached) 00:03:56.053 Has header "libaio.h" : YES 00:03:56.053 Library aio found: YES 00:03:56.053 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:56.053 Run-time dependency liburing found: YES 2.2 00:03:56.053 Dependency libvfn skipped: feature with-libvfn disabled 00:03:56.053 Run-time dependency appleframeworks found: NO (tried framework) 00:03:56.053 Run-time dependency appleframeworks found: NO (tried framework) 00:03:56.053 Configuring xnvme_config.h using configuration 00:03:56.053 Configuring xnvme.spec using configuration 00:03:56.053 Run-time dependency bash-completion found: YES 2.11 00:03:56.053 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:56.053 Program cp found: YES (/usr/bin/cp) 00:03:56.053 Has header "winsock2.h" : NO 00:03:56.053 Has header "dbghelp.h" : NO 00:03:56.053 Library rpcrt4 found: NO 00:03:56.053 Library rt found: YES 00:03:56.053 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:56.053 Found CMake: /usr/bin/cmake (3.27.7) 00:03:56.053 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:03:56.053 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:03:56.053 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:03:56.053 Build targets in project: 32 00:03:56.053 00:03:56.053 xnvme 0.7.3 00:03:56.053 00:03:56.053 User defined options 00:03:56.053 with-libaio : enabled 00:03:56.053 with-liburing: enabled 00:03:56.053 with-libvfn : disabled 00:03:56.053 with-spdk : false 00:03:56.053 00:03:56.053 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:56.311 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:56.569 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:03:56.569 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:03:56.569 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:03:56.569 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:03:56.569 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:03:56.569 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:03:56.569 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:03:56.569 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:03:56.569 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:03:56.569 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:03:56.569 [11/203] 
Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:03:56.569 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:03:56.569 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:03:56.569 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:03:56.569 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:03:56.826 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:03:56.826 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:03:56.826 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:03:56.826 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:03:56.826 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:03:56.826 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:03:56.826 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:03:56.826 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:03:56.826 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:03:56.826 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:03:56.826 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:03:56.826 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:03:56.826 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:03:56.826 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:56.826 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:03:56.826 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:03:56.826 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:03:56.826 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:56.826 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:56.826 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:56.826 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:57.084 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:57.084 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:57.084 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:57.084 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:57.084 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:57.084 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:57.084 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:57.084 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:57.084 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:57.084 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:57.084 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:57.084 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:57.084 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:57.084 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:57.084 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:57.084 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:57.084 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:57.084 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:03:57.084 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:57.084 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:57.084 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:57.084 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:57.084 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:57.084 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:57.084 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:57.341 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:57.341 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:57.341 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:57.341 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:57.341 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:57.341 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:57.341 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:57.341 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:57.341 [70/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:57.341 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:57.341 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:57.341 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:57.599 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:57.599 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:57.599 [76/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:03:57.599 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:57.599 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:57.599 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:57.599 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:03:57.599 [81/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:03:57.599 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:03:57.599 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:03:57.599 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:03:57.599 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:03:57.599 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:03:57.857 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:03:57.857 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:03:57.857 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:03:57.857 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:03:57.857 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:03:57.857 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:03:57.857 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:03:57.857 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:03:57.857 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:03:57.857 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:03:57.857 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:03:57.857 [98/203] Compiling C 
object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:03:57.857 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:03:57.857 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:03:57.857 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:03:57.857 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:03:57.857 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:03:57.857 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:03:57.857 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:03:57.857 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:03:57.857 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:03:57.857 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:03:57.857 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:03:57.857 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:03:57.857 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:03:58.114 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:03:58.114 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:03:58.114 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:03:58.114 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:03:58.114 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:03:58.114 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:03:58.114 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:03:58.114 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:03:58.114 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:03:58.114 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:03:58.114 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:03:58.114 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:03:58.114 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:03:58.114 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:03:58.114 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:03:58.114 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:03:58.114 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:03:58.114 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:03:58.114 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:03:58.114 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:03:58.114 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:03:58.371 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:03:58.371 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:03:58.371 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:03:58.371 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:03:58.371 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:03:58.371 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:03:58.371 [139/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:03:58.371 [140/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:03:58.371 [141/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:03:58.371 [142/203] Compiling C object 
tests/xnvme_tests_buf.p/buf.c.o 00:03:58.371 [143/203] Linking target lib/libxnvme.so 00:03:58.371 [144/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:03:58.371 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:03:58.371 [146/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:03:58.371 [147/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:03:58.630 [148/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:03:58.630 [149/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:03:58.630 [150/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:03:58.630 [151/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:03:58.630 [152/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:03:58.630 [153/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:03:58.630 [154/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:03:58.630 [155/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:03:58.630 [156/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:03:58.630 [157/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:03:58.630 [158/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:03:58.630 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:03:58.888 [160/203] Compiling C object tools/kvs.p/kvs.c.o 00:03:58.888 [161/203] Compiling C object tools/xdd.p/xdd.c.o 00:03:58.888 [162/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:03:58.888 [163/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:03:58.888 [164/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:03:58.888 [165/203] Compiling C object tools/lblk.p/lblk.c.o 00:03:58.888 [166/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:03:58.888 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:03:58.888 [168/203] Compiling C object tools/zoned.p/zoned.c.o 00:03:58.888 [169/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:03:58.888 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:03:58.888 [171/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:03:58.888 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:03:59.146 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:03:59.146 [174/203] Linking static target lib/libxnvme.a 00:03:59.146 [175/203] Linking target tests/xnvme_tests_cli 00:03:59.146 [176/203] Linking target tests/xnvme_tests_async_intf 00:03:59.146 [177/203] Linking target tests/xnvme_tests_enum 00:03:59.146 [178/203] Linking target tests/xnvme_tests_scc 00:03:59.146 [179/203] Linking target tests/xnvme_tests_ioworker 00:03:59.146 [180/203] Linking target tests/xnvme_tests_buf 00:03:59.146 [181/203] Linking target tests/xnvme_tests_lblk 00:03:59.146 [182/203] Linking target tests/xnvme_tests_xnvme_file 00:03:59.146 [183/203] Linking target tests/xnvme_tests_xnvme_cli 00:03:59.146 [184/203] Linking target tests/xnvme_tests_znd_explicit_open 00:03:59.146 [185/203] Linking target tests/xnvme_tests_znd_state 00:03:59.146 [186/203] Linking target tests/xnvme_tests_znd_append 00:03:59.146 [187/203] Linking target tests/xnvme_tests_znd_zrwa 00:03:59.146 [188/203] Linking target tests/xnvme_tests_kvs 00:03:59.404 [189/203] Linking target tests/xnvme_tests_map 00:03:59.404 [190/203] Linking target tools/lblk 00:03:59.404 [191/203] Linking 
target examples/xnvme_single_async 00:03:59.404 [192/203] Linking target tools/xnvme_file 00:03:59.404 [193/203] Linking target tools/kvs 00:03:59.404 [194/203] Linking target tools/xnvme 00:03:59.404 [195/203] Linking target examples/xnvme_enum 00:03:59.404 [196/203] Linking target tools/zoned 00:03:59.404 [197/203] Linking target examples/xnvme_io_async 00:03:59.404 [198/203] Linking target tools/xdd 00:03:59.404 [199/203] Linking target examples/xnvme_dev 00:03:59.404 [200/203] Linking target examples/xnvme_single_sync 00:03:59.404 [201/203] Linking target examples/xnvme_hello 00:03:59.404 [202/203] Linking target examples/zoned_io_async 00:03:59.404 [203/203] Linking target examples/zoned_io_sync 00:03:59.404 INFO: autodetecting backend as ninja 00:03:59.404 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:59.404 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:21.408 CC lib/ut_mock/mock.o 00:04:21.408 CC lib/log/log.o 00:04:21.408 CC lib/log/log_flags.o 00:04:21.408 CC lib/log/log_deprecated.o 00:04:21.408 CC lib/ut/ut.o 00:04:21.408 LIB libspdk_ut_mock.a 00:04:21.408 SO libspdk_ut_mock.so.6.0 00:04:21.408 LIB libspdk_log.a 00:04:21.408 LIB libspdk_ut.a 00:04:21.408 SO libspdk_log.so.7.0 00:04:21.408 SO libspdk_ut.so.2.0 00:04:21.408 SYMLINK libspdk_ut_mock.so 00:04:21.408 SYMLINK libspdk_ut.so 00:04:21.408 SYMLINK libspdk_log.so 00:04:21.408 CC lib/dma/dma.o 00:04:21.408 CXX lib/trace_parser/trace.o 00:04:21.408 CC lib/util/base64.o 00:04:21.408 CC lib/util/bit_array.o 00:04:21.408 CC lib/util/cpuset.o 00:04:21.408 CC lib/util/crc16.o 00:04:21.408 CC lib/util/crc32.o 00:04:21.408 CC lib/util/crc32c.o 00:04:21.408 CC lib/ioat/ioat.o 00:04:21.408 CC lib/vfio_user/host/vfio_user_pci.o 00:04:21.408 CC lib/util/crc32_ieee.o 00:04:21.408 CC lib/util/crc64.o 00:04:21.408 CC lib/util/dif.o 00:04:21.408 LIB libspdk_dma.a 00:04:21.408 CC lib/util/fd.o 00:04:21.408 SO libspdk_dma.so.4.0 00:04:21.408 CC lib/util/file.o 00:04:21.408 CC lib/vfio_user/host/vfio_user.o 00:04:21.408 CC lib/util/hexlify.o 00:04:21.408 SYMLINK libspdk_dma.so 00:04:21.408 CC lib/util/iov.o 00:04:21.408 CC lib/util/math.o 00:04:21.408 CC lib/util/pipe.o 00:04:21.408 CC lib/util/strerror_tls.o 00:04:21.408 LIB libspdk_ioat.a 00:04:21.408 CC lib/util/string.o 00:04:21.408 CC lib/util/uuid.o 00:04:21.408 SO libspdk_ioat.so.7.0 00:04:21.408 CC lib/util/fd_group.o 00:04:21.408 SYMLINK libspdk_ioat.so 00:04:21.408 LIB libspdk_vfio_user.a 00:04:21.408 CC lib/util/xor.o 00:04:21.408 CC lib/util/zipf.o 00:04:21.408 SO libspdk_vfio_user.so.5.0 00:04:21.408 SYMLINK libspdk_vfio_user.so 00:04:21.975 LIB libspdk_util.a 00:04:21.975 SO libspdk_util.so.9.0 00:04:21.975 LIB libspdk_trace_parser.a 00:04:21.975 SO libspdk_trace_parser.so.5.0 00:04:22.233 SYMLINK libspdk_util.so 00:04:22.233 SYMLINK libspdk_trace_parser.so 00:04:22.233 CC lib/rdma/common.o 00:04:22.233 CC lib/rdma/rdma_verbs.o 00:04:22.233 CC lib/conf/conf.o 00:04:22.233 CC lib/vmd/vmd.o 00:04:22.233 CC lib/vmd/led.o 00:04:22.233 CC lib/idxd/idxd.o 00:04:22.233 CC lib/idxd/idxd_user.o 00:04:22.233 CC lib/idxd/idxd_kernel.o 00:04:22.233 CC lib/env_dpdk/env.o 00:04:22.233 CC lib/json/json_parse.o 00:04:22.491 CC lib/env_dpdk/memory.o 00:04:22.491 CC lib/env_dpdk/pci.o 00:04:22.491 CC lib/json/json_util.o 00:04:22.491 CC lib/json/json_write.o 00:04:22.491 CC lib/env_dpdk/init.o 00:04:22.750 LIB libspdk_rdma.a 00:04:22.750 LIB libspdk_conf.a 00:04:22.750 SO libspdk_rdma.so.6.0 00:04:22.750 SO libspdk_conf.so.6.0 
00:04:22.750 SYMLINK libspdk_conf.so 00:04:22.750 CC lib/env_dpdk/threads.o 00:04:22.750 SYMLINK libspdk_rdma.so 00:04:22.750 CC lib/env_dpdk/pci_ioat.o 00:04:22.750 CC lib/env_dpdk/pci_virtio.o 00:04:23.015 CC lib/env_dpdk/pci_vmd.o 00:04:23.015 LIB libspdk_json.a 00:04:23.015 CC lib/env_dpdk/pci_idxd.o 00:04:23.015 CC lib/env_dpdk/pci_event.o 00:04:23.015 CC lib/env_dpdk/sigbus_handler.o 00:04:23.015 SO libspdk_json.so.6.0 00:04:23.015 CC lib/env_dpdk/pci_dpdk.o 00:04:23.015 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:23.015 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:23.015 LIB libspdk_idxd.a 00:04:23.015 SYMLINK libspdk_json.so 00:04:23.015 SO libspdk_idxd.so.12.0 00:04:23.272 LIB libspdk_vmd.a 00:04:23.272 SYMLINK libspdk_idxd.so 00:04:23.272 SO libspdk_vmd.so.6.0 00:04:23.272 SYMLINK libspdk_vmd.so 00:04:23.272 CC lib/jsonrpc/jsonrpc_server.o 00:04:23.272 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:23.272 CC lib/jsonrpc/jsonrpc_client.o 00:04:23.272 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:23.530 LIB libspdk_jsonrpc.a 00:04:23.788 SO libspdk_jsonrpc.so.6.0 00:04:23.788 SYMLINK libspdk_jsonrpc.so 00:04:24.045 CC lib/rpc/rpc.o 00:04:24.304 LIB libspdk_env_dpdk.a 00:04:24.304 LIB libspdk_rpc.a 00:04:24.304 SO libspdk_rpc.so.6.0 00:04:24.304 SO libspdk_env_dpdk.so.14.0 00:04:24.304 SYMLINK libspdk_rpc.so 00:04:24.562 SYMLINK libspdk_env_dpdk.so 00:04:24.562 CC lib/notify/notify_rpc.o 00:04:24.562 CC lib/notify/notify.o 00:04:24.562 CC lib/keyring/keyring_rpc.o 00:04:24.562 CC lib/keyring/keyring.o 00:04:24.562 CC lib/trace/trace.o 00:04:24.562 CC lib/trace/trace_flags.o 00:04:24.562 CC lib/trace/trace_rpc.o 00:04:24.820 LIB libspdk_notify.a 00:04:24.820 SO libspdk_notify.so.6.0 00:04:25.077 LIB libspdk_keyring.a 00:04:25.077 SYMLINK libspdk_notify.so 00:04:25.077 LIB libspdk_trace.a 00:04:25.077 SO libspdk_keyring.so.1.0 00:04:25.077 SO libspdk_trace.so.10.0 00:04:25.077 SYMLINK libspdk_keyring.so 00:04:25.077 SYMLINK libspdk_trace.so 00:04:25.335 CC lib/thread/thread.o 00:04:25.335 CC lib/thread/iobuf.o 00:04:25.336 CC lib/sock/sock.o 00:04:25.336 CC lib/sock/sock_rpc.o 00:04:25.902 LIB libspdk_sock.a 00:04:25.902 SO libspdk_sock.so.9.0 00:04:26.160 SYMLINK libspdk_sock.so 00:04:26.419 CC lib/nvme/nvme_fabric.o 00:04:26.419 CC lib/nvme/nvme_ctrlr.o 00:04:26.419 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:26.419 CC lib/nvme/nvme_ns.o 00:04:26.419 CC lib/nvme/nvme_ns_cmd.o 00:04:26.419 CC lib/nvme/nvme_pcie.o 00:04:26.419 CC lib/nvme/nvme_pcie_common.o 00:04:26.419 CC lib/nvme/nvme_qpair.o 00:04:26.419 CC lib/nvme/nvme.o 00:04:27.350 CC lib/nvme/nvme_quirks.o 00:04:27.350 CC lib/nvme/nvme_transport.o 00:04:27.350 CC lib/nvme/nvme_discovery.o 00:04:27.350 LIB libspdk_thread.a 00:04:27.350 SO libspdk_thread.so.10.0 00:04:27.350 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:27.350 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:27.607 CC lib/nvme/nvme_tcp.o 00:04:27.607 CC lib/nvme/nvme_opal.o 00:04:27.607 SYMLINK libspdk_thread.so 00:04:27.607 CC lib/nvme/nvme_io_msg.o 00:04:27.864 CC lib/nvme/nvme_poll_group.o 00:04:27.864 CC lib/nvme/nvme_zns.o 00:04:28.122 CC lib/nvme/nvme_stubs.o 00:04:28.122 CC lib/nvme/nvme_auth.o 00:04:28.122 CC lib/nvme/nvme_cuse.o 00:04:28.122 CC lib/nvme/nvme_rdma.o 00:04:28.453 CC lib/accel/accel.o 00:04:28.453 CC lib/accel/accel_rpc.o 00:04:28.453 CC lib/accel/accel_sw.o 00:04:28.712 CC lib/blob/blobstore.o 00:04:28.712 CC lib/init/json_config.o 00:04:28.712 CC lib/init/subsystem.o 00:04:28.970 CC lib/virtio/virtio.o 00:04:28.970 CC lib/init/subsystem_rpc.o 00:04:28.970 CC lib/init/rpc.o 
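Note: from this point the log is make building SPDK itself against the DPDK tree configured earlier: each component under lib/ is compiled (CC ...), archived (LIB libspdk_*.a), and, since the tree was configured --with-shared, also linked as a versioned shared object (the SO entries) with a bare SYMLINK libspdk_*.so link. A hedged sketch for inspecting the result after this make pass finishes, assuming SPDK's usual build/ output layout in this workspace:
  cd /home/vagrant/spdk_repo/spdk
  ls -l build/lib/libspdk_log.so*                     # the bare .so link should point at the versioned object named in the matching SO entry
  ldd build/lib/libspdk_env_dpdk.so | grep librte_    # with the external shared DPDK these should resolve under /home/vagrant/spdk_repo/dpdk/build/lib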
00:04:29.227 CC lib/blob/request.o 00:04:29.227 CC lib/blob/zeroes.o 00:04:29.227 CC lib/blob/blob_bs_dev.o 00:04:29.227 CC lib/virtio/virtio_vhost_user.o 00:04:29.228 LIB libspdk_init.a 00:04:29.228 CC lib/virtio/virtio_vfio_user.o 00:04:29.228 SO libspdk_init.so.5.0 00:04:29.228 CC lib/virtio/virtio_pci.o 00:04:29.228 SYMLINK libspdk_init.so 00:04:29.485 CC lib/event/app.o 00:04:29.485 CC lib/event/log_rpc.o 00:04:29.485 CC lib/event/reactor.o 00:04:29.485 CC lib/event/app_rpc.o 00:04:29.485 CC lib/event/scheduler_static.o 00:04:29.741 LIB libspdk_accel.a 00:04:29.741 SO libspdk_accel.so.15.0 00:04:29.741 LIB libspdk_virtio.a 00:04:29.741 SO libspdk_virtio.so.7.0 00:04:29.741 SYMLINK libspdk_accel.so 00:04:29.741 SYMLINK libspdk_virtio.so 00:04:29.998 LIB libspdk_nvme.a 00:04:29.998 CC lib/bdev/bdev.o 00:04:29.998 CC lib/bdev/bdev_rpc.o 00:04:29.998 CC lib/bdev/bdev_zone.o 00:04:29.998 CC lib/bdev/scsi_nvme.o 00:04:29.998 CC lib/bdev/part.o 00:04:30.254 LIB libspdk_event.a 00:04:30.254 SO libspdk_nvme.so.13.0 00:04:30.254 SO libspdk_event.so.13.0 00:04:30.254 SYMLINK libspdk_event.so 00:04:30.511 SYMLINK libspdk_nvme.so 00:04:33.036 LIB libspdk_blob.a 00:04:33.294 SO libspdk_blob.so.11.0 00:04:33.294 LIB libspdk_bdev.a 00:04:33.294 SYMLINK libspdk_blob.so 00:04:33.294 SO libspdk_bdev.so.15.0 00:04:33.551 SYMLINK libspdk_bdev.so 00:04:33.551 CC lib/lvol/lvol.o 00:04:33.551 CC lib/blobfs/blobfs.o 00:04:33.551 CC lib/blobfs/tree.o 00:04:33.809 CC lib/scsi/dev.o 00:04:33.809 CC lib/scsi/port.o 00:04:33.809 CC lib/scsi/lun.o 00:04:33.809 CC lib/nvmf/ctrlr.o 00:04:33.809 CC lib/ublk/ublk.o 00:04:33.809 CC lib/nbd/nbd.o 00:04:33.809 CC lib/ftl/ftl_core.o 00:04:33.809 CC lib/ftl/ftl_init.o 00:04:33.809 CC lib/ftl/ftl_layout.o 00:04:34.075 CC lib/ftl/ftl_debug.o 00:04:34.075 CC lib/scsi/scsi.o 00:04:34.075 CC lib/ublk/ublk_rpc.o 00:04:34.075 CC lib/scsi/scsi_bdev.o 00:04:34.075 CC lib/scsi/scsi_pr.o 00:04:34.334 CC lib/scsi/scsi_rpc.o 00:04:34.334 CC lib/ftl/ftl_io.o 00:04:34.334 CC lib/nbd/nbd_rpc.o 00:04:34.334 CC lib/nvmf/ctrlr_discovery.o 00:04:34.334 CC lib/scsi/task.o 00:04:34.334 LIB libspdk_nbd.a 00:04:34.591 SO libspdk_nbd.so.7.0 00:04:34.591 LIB libspdk_ublk.a 00:04:34.591 CC lib/ftl/ftl_sb.o 00:04:34.591 SYMLINK libspdk_nbd.so 00:04:34.591 CC lib/nvmf/ctrlr_bdev.o 00:04:34.591 SO libspdk_ublk.so.3.0 00:04:34.591 CC lib/ftl/ftl_l2p.o 00:04:34.591 CC lib/ftl/ftl_l2p_flat.o 00:04:34.591 SYMLINK libspdk_ublk.so 00:04:34.591 CC lib/ftl/ftl_nv_cache.o 00:04:34.591 LIB libspdk_blobfs.a 00:04:34.848 SO libspdk_blobfs.so.10.0 00:04:34.848 CC lib/ftl/ftl_band.o 00:04:34.848 LIB libspdk_scsi.a 00:04:34.848 SYMLINK libspdk_blobfs.so 00:04:34.848 CC lib/ftl/ftl_band_ops.o 00:04:34.848 CC lib/ftl/ftl_writer.o 00:04:34.848 SO libspdk_scsi.so.9.0 00:04:34.848 CC lib/ftl/ftl_rq.o 00:04:34.848 LIB libspdk_lvol.a 00:04:34.848 SO libspdk_lvol.so.10.0 00:04:34.848 CC lib/nvmf/subsystem.o 00:04:35.108 SYMLINK libspdk_scsi.so 00:04:35.108 CC lib/ftl/ftl_reloc.o 00:04:35.108 SYMLINK libspdk_lvol.so 00:04:35.108 CC lib/ftl/ftl_l2p_cache.o 00:04:35.108 CC lib/ftl/ftl_p2l.o 00:04:35.108 CC lib/ftl/mngt/ftl_mngt.o 00:04:35.108 CC lib/nvmf/nvmf.o 00:04:35.377 CC lib/nvmf/nvmf_rpc.o 00:04:35.377 CC lib/nvmf/transport.o 00:04:35.377 CC lib/nvmf/tcp.o 00:04:35.635 CC lib/nvmf/stubs.o 00:04:35.635 CC lib/nvmf/mdns_server.o 00:04:35.635 CC lib/nvmf/rdma.o 00:04:35.893 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:35.893 CC lib/nvmf/auth.o 00:04:36.151 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:36.151 CC 
lib/ftl/mngt/ftl_mngt_startup.o 00:04:36.151 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:36.151 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:36.151 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:36.151 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:36.410 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:36.410 CC lib/iscsi/conn.o 00:04:36.410 CC lib/iscsi/init_grp.o 00:04:36.410 CC lib/iscsi/iscsi.o 00:04:36.410 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:36.410 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:36.410 CC lib/iscsi/md5.o 00:04:36.680 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:36.680 CC lib/iscsi/param.o 00:04:36.680 CC lib/iscsi/portal_grp.o 00:04:36.680 CC lib/iscsi/tgt_node.o 00:04:36.680 CC lib/iscsi/iscsi_subsystem.o 00:04:36.941 CC lib/iscsi/iscsi_rpc.o 00:04:37.199 CC lib/iscsi/task.o 00:04:37.199 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:37.199 CC lib/vhost/vhost.o 00:04:37.199 CC lib/vhost/vhost_rpc.o 00:04:37.199 CC lib/vhost/vhost_scsi.o 00:04:37.199 CC lib/ftl/utils/ftl_conf.o 00:04:37.457 CC lib/vhost/vhost_blk.o 00:04:37.457 CC lib/vhost/rte_vhost_user.o 00:04:37.457 CC lib/ftl/utils/ftl_md.o 00:04:37.457 CC lib/ftl/utils/ftl_mempool.o 00:04:37.457 CC lib/ftl/utils/ftl_bitmap.o 00:04:37.715 CC lib/ftl/utils/ftl_property.o 00:04:37.715 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:37.973 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:37.973 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:37.973 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:37.973 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:38.231 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:38.231 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:38.231 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:38.231 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:38.231 LIB libspdk_iscsi.a 00:04:38.231 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:38.489 SO libspdk_iscsi.so.8.0 00:04:38.489 LIB libspdk_nvmf.a 00:04:38.489 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:38.489 CC lib/ftl/base/ftl_base_dev.o 00:04:38.489 CC lib/ftl/base/ftl_base_bdev.o 00:04:38.489 CC lib/ftl/ftl_trace.o 00:04:38.489 SO libspdk_nvmf.so.18.0 00:04:38.489 SYMLINK libspdk_iscsi.so 00:04:38.747 LIB libspdk_vhost.a 00:04:38.747 SO libspdk_vhost.so.8.0 00:04:38.747 LIB libspdk_ftl.a 00:04:38.747 SYMLINK libspdk_vhost.so 00:04:38.747 SYMLINK libspdk_nvmf.so 00:04:39.006 SO libspdk_ftl.so.9.0 00:04:39.572 SYMLINK libspdk_ftl.so 00:04:39.830 CC module/env_dpdk/env_dpdk_rpc.o 00:04:39.830 CC module/keyring/linux/keyring.o 00:04:39.830 CC module/blob/bdev/blob_bdev.o 00:04:39.830 CC module/sock/posix/posix.o 00:04:39.830 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:40.088 CC module/accel/iaa/accel_iaa.o 00:04:40.088 CC module/accel/dsa/accel_dsa.o 00:04:40.088 CC module/keyring/file/keyring.o 00:04:40.088 CC module/accel/error/accel_error.o 00:04:40.088 CC module/accel/ioat/accel_ioat.o 00:04:40.088 LIB libspdk_env_dpdk_rpc.a 00:04:40.088 SO libspdk_env_dpdk_rpc.so.6.0 00:04:40.088 CC module/keyring/linux/keyring_rpc.o 00:04:40.088 CC module/keyring/file/keyring_rpc.o 00:04:40.088 SYMLINK libspdk_env_dpdk_rpc.so 00:04:40.088 CC module/accel/ioat/accel_ioat_rpc.o 00:04:40.088 CC module/accel/error/accel_error_rpc.o 00:04:40.088 LIB libspdk_scheduler_dynamic.a 00:04:40.088 CC module/accel/dsa/accel_dsa_rpc.o 00:04:40.346 SO libspdk_scheduler_dynamic.so.4.0 00:04:40.346 LIB libspdk_keyring_linux.a 00:04:40.346 CC module/accel/iaa/accel_iaa_rpc.o 00:04:40.346 LIB libspdk_keyring_file.a 00:04:40.346 LIB libspdk_accel_ioat.a 00:04:40.346 SYMLINK libspdk_scheduler_dynamic.so 00:04:40.346 SO libspdk_keyring_linux.so.1.0 00:04:40.346 SO 
libspdk_keyring_file.so.1.0 00:04:40.346 LIB libspdk_blob_bdev.a 00:04:40.346 SO libspdk_accel_ioat.so.6.0 00:04:40.346 LIB libspdk_accel_error.a 00:04:40.346 SO libspdk_blob_bdev.so.11.0 00:04:40.346 LIB libspdk_accel_dsa.a 00:04:40.346 SO libspdk_accel_error.so.2.0 00:04:40.346 SYMLINK libspdk_keyring_linux.so 00:04:40.346 SYMLINK libspdk_keyring_file.so 00:04:40.346 SYMLINK libspdk_accel_ioat.so 00:04:40.346 SO libspdk_accel_dsa.so.5.0 00:04:40.346 SYMLINK libspdk_blob_bdev.so 00:04:40.346 LIB libspdk_accel_iaa.a 00:04:40.346 SYMLINK libspdk_accel_error.so 00:04:40.346 SO libspdk_accel_iaa.so.3.0 00:04:40.604 SYMLINK libspdk_accel_dsa.so 00:04:40.604 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:40.604 CC module/scheduler/gscheduler/gscheduler.o 00:04:40.604 SYMLINK libspdk_accel_iaa.so 00:04:40.604 LIB libspdk_scheduler_dpdk_governor.a 00:04:40.604 LIB libspdk_scheduler_gscheduler.a 00:04:40.604 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:40.604 SO libspdk_scheduler_gscheduler.so.4.0 00:04:40.604 CC module/bdev/lvol/vbdev_lvol.o 00:04:40.604 CC module/bdev/delay/vbdev_delay.o 00:04:40.604 CC module/bdev/malloc/bdev_malloc.o 00:04:40.862 CC module/bdev/gpt/gpt.o 00:04:40.862 CC module/bdev/error/vbdev_error.o 00:04:40.862 CC module/blobfs/bdev/blobfs_bdev.o 00:04:40.862 CC module/bdev/null/bdev_null.o 00:04:40.862 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:40.862 SYMLINK libspdk_scheduler_gscheduler.so 00:04:40.862 CC module/bdev/null/bdev_null_rpc.o 00:04:40.862 CC module/bdev/error/vbdev_error_rpc.o 00:04:40.862 LIB libspdk_sock_posix.a 00:04:40.862 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:40.862 CC module/bdev/gpt/vbdev_gpt.o 00:04:40.862 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:40.862 SO libspdk_sock_posix.so.6.0 00:04:41.121 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:41.121 LIB libspdk_bdev_null.a 00:04:41.121 LIB libspdk_bdev_error.a 00:04:41.121 SYMLINK libspdk_sock_posix.so 00:04:41.121 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:41.121 SO libspdk_bdev_null.so.6.0 00:04:41.121 SO libspdk_bdev_error.so.6.0 00:04:41.121 LIB libspdk_blobfs_bdev.a 00:04:41.121 SO libspdk_blobfs_bdev.so.6.0 00:04:41.121 SYMLINK libspdk_bdev_null.so 00:04:41.121 SYMLINK libspdk_bdev_error.so 00:04:41.121 LIB libspdk_bdev_malloc.a 00:04:41.121 LIB libspdk_bdev_delay.a 00:04:41.121 SO libspdk_bdev_malloc.so.6.0 00:04:41.121 SYMLINK libspdk_blobfs_bdev.so 00:04:41.387 SO libspdk_bdev_delay.so.6.0 00:04:41.387 LIB libspdk_bdev_gpt.a 00:04:41.387 SYMLINK libspdk_bdev_malloc.so 00:04:41.387 SO libspdk_bdev_gpt.so.6.0 00:04:41.387 SYMLINK libspdk_bdev_delay.so 00:04:41.387 CC module/bdev/nvme/bdev_nvme.o 00:04:41.387 CC module/bdev/passthru/vbdev_passthru.o 00:04:41.387 CC module/bdev/raid/bdev_raid.o 00:04:41.387 CC module/bdev/split/vbdev_split.o 00:04:41.387 SYMLINK libspdk_bdev_gpt.so 00:04:41.387 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:41.387 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:41.387 CC module/bdev/xnvme/bdev_xnvme.o 00:04:41.644 CC module/bdev/ftl/bdev_ftl.o 00:04:41.644 CC module/bdev/aio/bdev_aio.o 00:04:41.644 LIB libspdk_bdev_lvol.a 00:04:41.644 SO libspdk_bdev_lvol.so.6.0 00:04:41.644 CC module/bdev/split/vbdev_split_rpc.o 00:04:41.644 SYMLINK libspdk_bdev_lvol.so 00:04:41.644 CC module/bdev/raid/bdev_raid_rpc.o 00:04:41.644 LIB libspdk_bdev_passthru.a 00:04:41.644 CC module/bdev/iscsi/bdev_iscsi.o 00:04:41.644 SO libspdk_bdev_passthru.so.6.0 00:04:41.902 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:41.902 CC 
module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:41.902 SYMLINK libspdk_bdev_passthru.so 00:04:41.902 LIB libspdk_bdev_split.a 00:04:41.902 CC module/bdev/raid/bdev_raid_sb.o 00:04:41.902 SO libspdk_bdev_split.so.6.0 00:04:41.902 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:41.902 CC module/bdev/raid/raid0.o 00:04:41.902 CC module/bdev/aio/bdev_aio_rpc.o 00:04:41.902 SYMLINK libspdk_bdev_split.so 00:04:41.902 CC module/bdev/raid/raid1.o 00:04:41.902 LIB libspdk_bdev_xnvme.a 00:04:41.902 LIB libspdk_bdev_zone_block.a 00:04:41.902 SO libspdk_bdev_xnvme.so.3.0 00:04:42.160 SO libspdk_bdev_zone_block.so.6.0 00:04:42.161 SYMLINK libspdk_bdev_xnvme.so 00:04:42.161 SYMLINK libspdk_bdev_zone_block.so 00:04:42.161 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:42.161 LIB libspdk_bdev_ftl.a 00:04:42.161 LIB libspdk_bdev_aio.a 00:04:42.161 CC module/bdev/raid/concat.o 00:04:42.161 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:42.161 SO libspdk_bdev_ftl.so.6.0 00:04:42.161 SO libspdk_bdev_aio.so.6.0 00:04:42.161 CC module/bdev/nvme/nvme_rpc.o 00:04:42.418 SYMLINK libspdk_bdev_ftl.so 00:04:42.418 CC module/bdev/nvme/bdev_mdns_client.o 00:04:42.418 SYMLINK libspdk_bdev_aio.so 00:04:42.419 CC module/bdev/nvme/vbdev_opal.o 00:04:42.419 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:42.419 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:42.419 LIB libspdk_bdev_iscsi.a 00:04:42.419 SO libspdk_bdev_iscsi.so.6.0 00:04:42.419 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:42.419 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:42.419 SYMLINK libspdk_bdev_iscsi.so 00:04:42.419 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:42.677 LIB libspdk_bdev_raid.a 00:04:42.677 SO libspdk_bdev_raid.so.6.0 00:04:42.677 SYMLINK libspdk_bdev_raid.so 00:04:42.935 LIB libspdk_bdev_virtio.a 00:04:42.935 SO libspdk_bdev_virtio.so.6.0 00:04:43.194 SYMLINK libspdk_bdev_virtio.so 00:04:44.129 LIB libspdk_bdev_nvme.a 00:04:44.386 SO libspdk_bdev_nvme.so.7.0 00:04:44.386 SYMLINK libspdk_bdev_nvme.so 00:04:44.950 CC module/event/subsystems/iobuf/iobuf.o 00:04:44.950 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:44.950 CC module/event/subsystems/sock/sock.o 00:04:44.950 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:44.950 CC module/event/subsystems/vmd/vmd.o 00:04:44.950 CC module/event/subsystems/keyring/keyring.o 00:04:44.950 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:44.950 CC module/event/subsystems/scheduler/scheduler.o 00:04:45.207 LIB libspdk_event_vhost_blk.a 00:04:45.207 LIB libspdk_event_scheduler.a 00:04:45.207 LIB libspdk_event_keyring.a 00:04:45.207 LIB libspdk_event_vmd.a 00:04:45.207 SO libspdk_event_vhost_blk.so.3.0 00:04:45.207 LIB libspdk_event_iobuf.a 00:04:45.207 SO libspdk_event_scheduler.so.4.0 00:04:45.207 SO libspdk_event_keyring.so.1.0 00:04:45.207 SO libspdk_event_vmd.so.6.0 00:04:45.207 LIB libspdk_event_sock.a 00:04:45.207 SO libspdk_event_iobuf.so.3.0 00:04:45.207 SYMLINK libspdk_event_scheduler.so 00:04:45.207 SO libspdk_event_sock.so.5.0 00:04:45.207 SYMLINK libspdk_event_vhost_blk.so 00:04:45.207 SYMLINK libspdk_event_keyring.so 00:04:45.207 SYMLINK libspdk_event_vmd.so 00:04:45.207 SYMLINK libspdk_event_iobuf.so 00:04:45.464 SYMLINK libspdk_event_sock.so 00:04:45.720 CC module/event/subsystems/accel/accel.o 00:04:45.720 LIB libspdk_event_accel.a 00:04:45.720 SO libspdk_event_accel.so.6.0 00:04:45.977 SYMLINK libspdk_event_accel.so 00:04:46.235 CC module/event/subsystems/bdev/bdev.o 00:04:46.235 LIB libspdk_event_bdev.a 00:04:46.493 SO libspdk_event_bdev.so.6.0 00:04:46.493 SYMLINK libspdk_event_bdev.so 
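Note: alongside the core libraries, the CC module/... entries in this region build SPDK's plugin-style modules (bdev backends, accel/sock/keyring modules, and the event subsystems such as the libspdk_event_bdev library completed just above). A small, hypothetical check of what this produces once the build and the application link are done; spdk_tgt is used here only as an example target:
  ls build/lib/libspdk_event_*.so                  # one library per event subsystem wired into SPDK applications
  ldd build/bin/spdk_tgt | grep libspdk_event_     # hypothetical: with --with-shared the app should pull these in at run time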
00:04:46.750 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:46.750 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:46.750 CC module/event/subsystems/nbd/nbd.o 00:04:46.750 CC module/event/subsystems/ublk/ublk.o 00:04:46.750 CC module/event/subsystems/scsi/scsi.o 00:04:46.750 LIB libspdk_event_nbd.a 00:04:46.750 SO libspdk_event_nbd.so.6.0 00:04:46.750 LIB libspdk_event_ublk.a 00:04:46.750 LIB libspdk_event_scsi.a 00:04:47.008 SO libspdk_event_ublk.so.3.0 00:04:47.008 SO libspdk_event_scsi.so.6.0 00:04:47.008 SYMLINK libspdk_event_nbd.so 00:04:47.008 LIB libspdk_event_nvmf.a 00:04:47.008 SYMLINK libspdk_event_ublk.so 00:04:47.008 SYMLINK libspdk_event_scsi.so 00:04:47.008 SO libspdk_event_nvmf.so.6.0 00:04:47.008 SYMLINK libspdk_event_nvmf.so 00:04:47.266 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:47.266 CC module/event/subsystems/iscsi/iscsi.o 00:04:47.266 LIB libspdk_event_iscsi.a 00:04:47.266 LIB libspdk_event_vhost_scsi.a 00:04:47.527 SO libspdk_event_iscsi.so.6.0 00:04:47.527 SO libspdk_event_vhost_scsi.so.3.0 00:04:47.527 SYMLINK libspdk_event_iscsi.so 00:04:47.527 SYMLINK libspdk_event_vhost_scsi.so 00:04:47.527 SO libspdk.so.6.0 00:04:47.527 SYMLINK libspdk.so 00:04:47.789 CXX app/trace/trace.o 00:04:48.048 CC app/trace_record/trace_record.o 00:04:48.048 TEST_HEADER include/spdk/accel.h 00:04:48.048 TEST_HEADER include/spdk/accel_module.h 00:04:48.048 TEST_HEADER include/spdk/assert.h 00:04:48.048 TEST_HEADER include/spdk/barrier.h 00:04:48.048 TEST_HEADER include/spdk/base64.h 00:04:48.048 TEST_HEADER include/spdk/bdev.h 00:04:48.048 TEST_HEADER include/spdk/bdev_module.h 00:04:48.048 TEST_HEADER include/spdk/bdev_zone.h 00:04:48.048 TEST_HEADER include/spdk/bit_array.h 00:04:48.048 TEST_HEADER include/spdk/bit_pool.h 00:04:48.048 TEST_HEADER include/spdk/blob_bdev.h 00:04:48.048 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:48.048 TEST_HEADER include/spdk/blobfs.h 00:04:48.048 TEST_HEADER include/spdk/blob.h 00:04:48.048 TEST_HEADER include/spdk/conf.h 00:04:48.048 TEST_HEADER include/spdk/config.h 00:04:48.048 TEST_HEADER include/spdk/cpuset.h 00:04:48.048 TEST_HEADER include/spdk/crc16.h 00:04:48.048 TEST_HEADER include/spdk/crc32.h 00:04:48.048 TEST_HEADER include/spdk/crc64.h 00:04:48.048 TEST_HEADER include/spdk/dif.h 00:04:48.048 CC app/nvmf_tgt/nvmf_main.o 00:04:48.048 TEST_HEADER include/spdk/dma.h 00:04:48.048 TEST_HEADER include/spdk/endian.h 00:04:48.048 CC app/iscsi_tgt/iscsi_tgt.o 00:04:48.048 TEST_HEADER include/spdk/env_dpdk.h 00:04:48.048 TEST_HEADER include/spdk/env.h 00:04:48.048 TEST_HEADER include/spdk/event.h 00:04:48.048 TEST_HEADER include/spdk/fd_group.h 00:04:48.048 TEST_HEADER include/spdk/fd.h 00:04:48.048 TEST_HEADER include/spdk/file.h 00:04:48.048 TEST_HEADER include/spdk/ftl.h 00:04:48.048 TEST_HEADER include/spdk/gpt_spec.h 00:04:48.048 TEST_HEADER include/spdk/hexlify.h 00:04:48.048 TEST_HEADER include/spdk/histogram_data.h 00:04:48.048 TEST_HEADER include/spdk/idxd.h 00:04:48.048 TEST_HEADER include/spdk/idxd_spec.h 00:04:48.049 CC examples/accel/perf/accel_perf.o 00:04:48.049 TEST_HEADER include/spdk/init.h 00:04:48.049 TEST_HEADER include/spdk/ioat.h 00:04:48.049 TEST_HEADER include/spdk/ioat_spec.h 00:04:48.049 TEST_HEADER include/spdk/iscsi_spec.h 00:04:48.049 TEST_HEADER include/spdk/json.h 00:04:48.049 TEST_HEADER include/spdk/jsonrpc.h 00:04:48.049 TEST_HEADER include/spdk/keyring.h 00:04:48.049 TEST_HEADER include/spdk/keyring_module.h 00:04:48.049 TEST_HEADER include/spdk/likely.h 00:04:48.049 TEST_HEADER 
include/spdk/log.h 00:04:48.049 TEST_HEADER include/spdk/lvol.h 00:04:48.049 TEST_HEADER include/spdk/memory.h 00:04:48.049 TEST_HEADER include/spdk/mmio.h 00:04:48.049 TEST_HEADER include/spdk/nbd.h 00:04:48.049 TEST_HEADER include/spdk/notify.h 00:04:48.049 TEST_HEADER include/spdk/nvme.h 00:04:48.049 CC test/blobfs/mkfs/mkfs.o 00:04:48.049 TEST_HEADER include/spdk/nvme_intel.h 00:04:48.049 CC test/app/bdev_svc/bdev_svc.o 00:04:48.049 CC test/bdev/bdevio/bdevio.o 00:04:48.049 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:48.049 CC test/accel/dif/dif.o 00:04:48.049 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:48.049 TEST_HEADER include/spdk/nvme_spec.h 00:04:48.049 TEST_HEADER include/spdk/nvme_zns.h 00:04:48.049 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:48.049 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:48.049 TEST_HEADER include/spdk/nvmf.h 00:04:48.049 TEST_HEADER include/spdk/nvmf_spec.h 00:04:48.049 TEST_HEADER include/spdk/nvmf_transport.h 00:04:48.049 TEST_HEADER include/spdk/opal.h 00:04:48.049 TEST_HEADER include/spdk/opal_spec.h 00:04:48.049 TEST_HEADER include/spdk/pci_ids.h 00:04:48.049 TEST_HEADER include/spdk/pipe.h 00:04:48.049 TEST_HEADER include/spdk/queue.h 00:04:48.049 TEST_HEADER include/spdk/reduce.h 00:04:48.049 TEST_HEADER include/spdk/rpc.h 00:04:48.049 TEST_HEADER include/spdk/scheduler.h 00:04:48.049 TEST_HEADER include/spdk/scsi.h 00:04:48.049 TEST_HEADER include/spdk/scsi_spec.h 00:04:48.049 TEST_HEADER include/spdk/sock.h 00:04:48.049 TEST_HEADER include/spdk/stdinc.h 00:04:48.049 TEST_HEADER include/spdk/string.h 00:04:48.049 TEST_HEADER include/spdk/thread.h 00:04:48.049 TEST_HEADER include/spdk/trace.h 00:04:48.049 TEST_HEADER include/spdk/trace_parser.h 00:04:48.049 TEST_HEADER include/spdk/tree.h 00:04:48.049 TEST_HEADER include/spdk/ublk.h 00:04:48.049 TEST_HEADER include/spdk/util.h 00:04:48.049 TEST_HEADER include/spdk/uuid.h 00:04:48.049 TEST_HEADER include/spdk/version.h 00:04:48.049 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:48.049 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:48.049 TEST_HEADER include/spdk/vhost.h 00:04:48.049 TEST_HEADER include/spdk/vmd.h 00:04:48.049 TEST_HEADER include/spdk/xor.h 00:04:48.049 TEST_HEADER include/spdk/zipf.h 00:04:48.049 CXX test/cpp_headers/accel.o 00:04:48.306 LINK nvmf_tgt 00:04:48.306 LINK iscsi_tgt 00:04:48.306 LINK bdev_svc 00:04:48.306 LINK spdk_trace_record 00:04:48.306 LINK mkfs 00:04:48.306 CXX test/cpp_headers/accel_module.o 00:04:48.306 LINK spdk_trace 00:04:48.564 CC test/app/histogram_perf/histogram_perf.o 00:04:48.564 CXX test/cpp_headers/assert.o 00:04:48.564 CC test/app/jsoncat/jsoncat.o 00:04:48.564 LINK bdevio 00:04:48.564 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:48.564 LINK accel_perf 00:04:48.564 CC app/spdk_tgt/spdk_tgt.o 00:04:48.564 LINK dif 00:04:48.821 LINK histogram_perf 00:04:48.821 CXX test/cpp_headers/barrier.o 00:04:48.821 LINK jsoncat 00:04:48.821 CC test/dma/test_dma/test_dma.o 00:04:48.821 LINK spdk_tgt 00:04:48.821 CXX test/cpp_headers/base64.o 00:04:48.821 CC test/env/mem_callbacks/mem_callbacks.o 00:04:49.078 CC test/event/event_perf/event_perf.o 00:04:49.078 CC test/event/reactor/reactor.o 00:04:49.078 CC examples/bdev/hello_world/hello_bdev.o 00:04:49.078 CXX test/cpp_headers/bdev.o 00:04:49.078 CC test/nvme/aer/aer.o 00:04:49.078 LINK nvme_fuzz 00:04:49.078 LINK mem_callbacks 00:04:49.078 CC test/lvol/esnap/esnap.o 00:04:49.078 LINK event_perf 00:04:49.078 LINK reactor 00:04:49.336 CC app/spdk_lspci/spdk_lspci.o 00:04:49.336 LINK test_dma 00:04:49.336 CXX 
test/cpp_headers/bdev_module.o 00:04:49.336 LINK hello_bdev 00:04:49.336 CC test/env/vtophys/vtophys.o 00:04:49.336 LINK spdk_lspci 00:04:49.336 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:49.336 LINK aer 00:04:49.336 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:49.336 CC test/event/reactor_perf/reactor_perf.o 00:04:49.593 LINK vtophys 00:04:49.593 CXX test/cpp_headers/bdev_zone.o 00:04:49.593 CC test/app/stub/stub.o 00:04:49.593 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:49.593 LINK reactor_perf 00:04:49.593 CC app/spdk_nvme_perf/perf.o 00:04:49.593 CC examples/bdev/bdevperf/bdevperf.o 00:04:49.593 CC test/nvme/reset/reset.o 00:04:49.850 CXX test/cpp_headers/bit_array.o 00:04:49.850 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:49.850 LINK stub 00:04:49.850 CXX test/cpp_headers/bit_pool.o 00:04:49.850 CC test/event/app_repeat/app_repeat.o 00:04:49.850 LINK env_dpdk_post_init 00:04:50.108 LINK reset 00:04:50.108 CC test/event/scheduler/scheduler.o 00:04:50.108 LINK vhost_fuzz 00:04:50.108 CXX test/cpp_headers/blob_bdev.o 00:04:50.108 LINK app_repeat 00:04:50.365 CC test/env/memory/memory_ut.o 00:04:50.365 CC test/nvme/sgl/sgl.o 00:04:50.365 CXX test/cpp_headers/blobfs_bdev.o 00:04:50.365 LINK scheduler 00:04:50.365 CC test/nvme/e2edp/nvme_dp.o 00:04:50.365 CC test/rpc_client/rpc_client_test.o 00:04:50.644 CXX test/cpp_headers/blobfs.o 00:04:50.644 LINK sgl 00:04:50.644 LINK bdevperf 00:04:50.644 CC test/env/pci/pci_ut.o 00:04:50.644 LINK rpc_client_test 00:04:50.644 CXX test/cpp_headers/blob.o 00:04:50.902 LINK spdk_nvme_perf 00:04:50.902 LINK nvme_dp 00:04:50.902 CC test/nvme/overhead/overhead.o 00:04:50.902 CXX test/cpp_headers/conf.o 00:04:50.902 CC test/nvme/err_injection/err_injection.o 00:04:50.902 CC app/spdk_nvme_identify/identify.o 00:04:51.160 CC examples/blob/hello_world/hello_blob.o 00:04:51.160 CC test/nvme/startup/startup.o 00:04:51.160 CXX test/cpp_headers/config.o 00:04:51.160 LINK memory_ut 00:04:51.160 LINK pci_ut 00:04:51.160 CXX test/cpp_headers/cpuset.o 00:04:51.160 LINK overhead 00:04:51.160 LINK err_injection 00:04:51.160 LINK startup 00:04:51.418 LINK hello_blob 00:04:51.418 CXX test/cpp_headers/crc16.o 00:04:51.418 CC test/nvme/reserve/reserve.o 00:04:51.418 CXX test/cpp_headers/crc32.o 00:04:51.418 CC test/nvme/simple_copy/simple_copy.o 00:04:51.418 LINK iscsi_fuzz 00:04:51.418 CC app/spdk_nvme_discover/discovery_aer.o 00:04:51.675 CC examples/blob/cli/blobcli.o 00:04:51.675 CC test/thread/poller_perf/poller_perf.o 00:04:51.675 CXX test/cpp_headers/crc64.o 00:04:51.675 CC examples/ioat/perf/perf.o 00:04:51.675 LINK reserve 00:04:51.675 LINK spdk_nvme_discover 00:04:51.675 LINK poller_perf 00:04:51.675 LINK simple_copy 00:04:51.933 CXX test/cpp_headers/dif.o 00:04:51.933 CC test/nvme/connect_stress/connect_stress.o 00:04:51.933 LINK ioat_perf 00:04:51.933 CXX test/cpp_headers/dma.o 00:04:51.933 CC examples/ioat/verify/verify.o 00:04:51.933 CC app/spdk_top/spdk_top.o 00:04:51.933 LINK spdk_nvme_identify 00:04:51.933 CXX test/cpp_headers/endian.o 00:04:52.191 LINK connect_stress 00:04:52.191 CC examples/nvme/hello_world/hello_world.o 00:04:52.191 LINK blobcli 00:04:52.191 LINK verify 00:04:52.191 CXX test/cpp_headers/env_dpdk.o 00:04:52.191 CC test/nvme/boot_partition/boot_partition.o 00:04:52.448 CC examples/sock/hello_world/hello_sock.o 00:04:52.448 LINK hello_world 00:04:52.448 CXX test/cpp_headers/env.o 00:04:52.448 LINK boot_partition 00:04:52.448 CC examples/vmd/lsvmd/lsvmd.o 00:04:52.448 CC examples/vmd/led/led.o 00:04:52.448 CC 
examples/nvmf/nvmf/nvmf.o 00:04:52.707 CXX test/cpp_headers/event.o 00:04:52.707 LINK lsvmd 00:04:52.707 CC examples/util/zipf/zipf.o 00:04:52.707 CC examples/nvme/reconnect/reconnect.o 00:04:52.707 LINK led 00:04:52.707 LINK hello_sock 00:04:52.707 CC test/nvme/compliance/nvme_compliance.o 00:04:52.707 CXX test/cpp_headers/fd_group.o 00:04:52.707 LINK zipf 00:04:52.965 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:52.965 LINK nvmf 00:04:52.965 CXX test/cpp_headers/fd.o 00:04:52.965 CC app/vhost/vhost.o 00:04:52.965 CC app/spdk_dd/spdk_dd.o 00:04:52.965 LINK reconnect 00:04:53.223 LINK spdk_top 00:04:53.223 LINK nvme_compliance 00:04:53.223 CXX test/cpp_headers/file.o 00:04:53.223 LINK vhost 00:04:53.223 CC examples/thread/thread/thread_ex.o 00:04:53.223 CC examples/nvme/arbitration/arbitration.o 00:04:53.480 CXX test/cpp_headers/ftl.o 00:04:53.480 CC test/nvme/fused_ordering/fused_ordering.o 00:04:53.480 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:53.480 LINK spdk_dd 00:04:53.480 LINK thread 00:04:53.480 CC examples/idxd/perf/perf.o 00:04:53.480 LINK nvme_manage 00:04:53.480 CXX test/cpp_headers/gpt_spec.o 00:04:53.480 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:53.738 LINK doorbell_aers 00:04:53.738 LINK fused_ordering 00:04:53.738 LINK arbitration 00:04:53.738 CXX test/cpp_headers/hexlify.o 00:04:53.738 CXX test/cpp_headers/histogram_data.o 00:04:53.739 LINK interrupt_tgt 00:04:53.739 CC examples/nvme/hotplug/hotplug.o 00:04:53.739 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:53.739 CXX test/cpp_headers/idxd.o 00:04:53.739 CC app/fio/nvme/fio_plugin.o 00:04:53.997 LINK idxd_perf 00:04:53.997 CC test/nvme/fdp/fdp.o 00:04:53.997 CXX test/cpp_headers/idxd_spec.o 00:04:53.997 LINK cmb_copy 00:04:53.997 CC test/nvme/cuse/cuse.o 00:04:53.997 CC examples/nvme/abort/abort.o 00:04:53.997 LINK hotplug 00:04:53.997 CC app/fio/bdev/fio_plugin.o 00:04:54.255 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:54.255 CXX test/cpp_headers/init.o 00:04:54.255 CXX test/cpp_headers/ioat.o 00:04:54.255 CXX test/cpp_headers/ioat_spec.o 00:04:54.255 LINK pmr_persistence 00:04:54.255 CXX test/cpp_headers/iscsi_spec.o 00:04:54.255 LINK fdp 00:04:54.255 CXX test/cpp_headers/json.o 00:04:54.513 CXX test/cpp_headers/jsonrpc.o 00:04:54.513 CXX test/cpp_headers/keyring.o 00:04:54.513 LINK abort 00:04:54.513 CXX test/cpp_headers/keyring_module.o 00:04:54.513 CXX test/cpp_headers/likely.o 00:04:54.513 CXX test/cpp_headers/log.o 00:04:54.513 LINK spdk_nvme 00:04:54.513 CXX test/cpp_headers/lvol.o 00:04:54.771 CXX test/cpp_headers/memory.o 00:04:54.771 LINK spdk_bdev 00:04:54.771 CXX test/cpp_headers/mmio.o 00:04:54.771 CXX test/cpp_headers/nbd.o 00:04:54.771 CXX test/cpp_headers/notify.o 00:04:54.771 CXX test/cpp_headers/nvme.o 00:04:54.771 CXX test/cpp_headers/nvme_intel.o 00:04:54.771 CXX test/cpp_headers/nvme_ocssd.o 00:04:54.771 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:54.771 CXX test/cpp_headers/nvme_spec.o 00:04:54.771 CXX test/cpp_headers/nvme_zns.o 00:04:54.771 CXX test/cpp_headers/nvmf_cmd.o 00:04:54.771 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:55.029 CXX test/cpp_headers/nvmf.o 00:04:55.029 CXX test/cpp_headers/nvmf_spec.o 00:04:55.029 CXX test/cpp_headers/nvmf_transport.o 00:04:55.029 CXX test/cpp_headers/opal.o 00:04:55.029 CXX test/cpp_headers/opal_spec.o 00:04:55.029 CXX test/cpp_headers/pci_ids.o 00:04:55.029 CXX test/cpp_headers/pipe.o 00:04:55.029 CXX test/cpp_headers/queue.o 00:04:55.029 CXX test/cpp_headers/reduce.o 00:04:55.029 CXX test/cpp_headers/rpc.o 00:04:55.029 CXX 
test/cpp_headers/scheduler.o 00:04:55.288 CXX test/cpp_headers/scsi.o 00:04:55.288 CXX test/cpp_headers/scsi_spec.o 00:04:55.288 CXX test/cpp_headers/sock.o 00:04:55.288 CXX test/cpp_headers/stdinc.o 00:04:55.288 CXX test/cpp_headers/string.o 00:04:55.288 CXX test/cpp_headers/thread.o 00:04:55.288 CXX test/cpp_headers/trace.o 00:04:55.288 CXX test/cpp_headers/trace_parser.o 00:04:55.288 CXX test/cpp_headers/tree.o 00:04:55.288 CXX test/cpp_headers/ublk.o 00:04:55.547 CXX test/cpp_headers/util.o 00:04:55.547 CXX test/cpp_headers/uuid.o 00:04:55.547 CXX test/cpp_headers/version.o 00:04:55.547 CXX test/cpp_headers/vfio_user_pci.o 00:04:55.547 CXX test/cpp_headers/vfio_user_spec.o 00:04:55.547 CXX test/cpp_headers/vhost.o 00:04:55.547 CXX test/cpp_headers/vmd.o 00:04:55.547 CXX test/cpp_headers/xor.o 00:04:55.547 LINK cuse 00:04:55.547 CXX test/cpp_headers/zipf.o 00:04:55.805 LINK esnap 00:04:56.373 00:04:56.373 real 1m3.286s 00:04:56.373 user 5m50.651s 00:04:56.373 sys 1m16.789s 00:04:56.373 13:03:52 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:04:56.373 13:03:52 make -- common/autotest_common.sh@10 -- $ set +x 00:04:56.373 ************************************ 00:04:56.373 END TEST make 00:04:56.373 ************************************ 00:04:56.373 13:03:52 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:56.373 13:03:52 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:56.373 13:03:52 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:56.373 13:03:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:56.373 13:03:52 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:56.373 13:03:52 -- pm/common@44 -- $ pid=6080 00:04:56.373 13:03:52 -- pm/common@50 -- $ kill -TERM 6080 00:04:56.373 13:03:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:56.373 13:03:52 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:56.373 13:03:52 -- pm/common@44 -- $ pid=6082 00:04:56.373 13:03:52 -- pm/common@50 -- $ kill -TERM 6082 00:04:56.373 13:03:53 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:56.373 13:03:53 -- nvmf/common.sh@7 -- # uname -s 00:04:56.373 13:03:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:56.373 13:03:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:56.373 13:03:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:56.373 13:03:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:56.373 13:03:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:56.373 13:03:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:56.373 13:03:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:56.373 13:03:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:56.373 13:03:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:56.373 13:03:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:56.373 13:03:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c750bd6c-4972-4ac5-9386-4f497ffea9b0 00:04:56.373 13:03:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=c750bd6c-4972-4ac5-9386-4f497ffea9b0 00:04:56.373 13:03:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:56.373 13:03:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:56.373 13:03:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:56.373 13:03:53 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:56.373 13:03:53 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:56.373 13:03:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:56.373 13:03:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:56.373 13:03:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:56.373 13:03:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.373 13:03:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.373 13:03:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.373 13:03:53 -- paths/export.sh@5 -- # export PATH 00:04:56.373 13:03:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.373 13:03:53 -- nvmf/common.sh@47 -- # : 0 00:04:56.373 13:03:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:56.373 13:03:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:56.373 13:03:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:56.373 13:03:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:56.373 13:03:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:56.373 13:03:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:56.373 13:03:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:56.373 13:03:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:56.373 13:03:53 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:56.373 13:03:53 -- spdk/autotest.sh@32 -- # uname -s 00:04:56.373 13:03:53 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:56.373 13:03:53 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:56.373 13:03:53 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:56.373 13:03:53 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:56.373 13:03:53 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:56.373 13:03:53 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:56.373 13:03:53 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:56.373 13:03:53 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:56.373 13:03:53 -- spdk/autotest.sh@48 -- # udevadm_pid=65948 00:04:56.373 13:03:53 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:56.373 13:03:53 -- pm/common@17 -- # local monitor 00:04:56.373 13:03:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:56.373 13:03:53 -- spdk/autotest.sh@47 -- # 
/usr/sbin/udevadm monitor --property 00:04:56.373 13:03:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:56.373 13:03:53 -- pm/common@25 -- # sleep 1 00:04:56.373 13:03:53 -- pm/common@21 -- # date +%s 00:04:56.373 13:03:53 -- pm/common@21 -- # date +%s 00:04:56.373 13:03:53 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721048633 00:04:56.373 13:03:53 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721048633 00:04:56.630 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721048633_collect-vmstat.pm.log 00:04:56.630 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721048633_collect-cpu-load.pm.log 00:04:57.563 13:03:54 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:57.563 13:03:54 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:57.563 13:03:54 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:57.563 13:03:54 -- common/autotest_common.sh@10 -- # set +x 00:04:57.563 13:03:54 -- spdk/autotest.sh@59 -- # create_test_list 00:04:57.563 13:03:54 -- common/autotest_common.sh@744 -- # xtrace_disable 00:04:57.563 13:03:54 -- common/autotest_common.sh@10 -- # set +x 00:04:57.563 13:03:54 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:57.563 13:03:54 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:57.563 13:03:54 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:57.563 13:03:54 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:57.563 13:03:54 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:57.563 13:03:54 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:57.563 13:03:54 -- common/autotest_common.sh@1451 -- # uname 00:04:57.563 13:03:54 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:04:57.563 13:03:54 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:57.563 13:03:54 -- common/autotest_common.sh@1471 -- # uname 00:04:57.563 13:03:54 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:04:57.563 13:03:54 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:57.563 13:03:54 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:57.563 13:03:54 -- spdk/autotest.sh@72 -- # hash lcov 00:04:57.563 13:03:54 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:57.563 13:03:54 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:57.563 --rc lcov_branch_coverage=1 00:04:57.563 --rc lcov_function_coverage=1 00:04:57.563 --rc genhtml_branch_coverage=1 00:04:57.563 --rc genhtml_function_coverage=1 00:04:57.563 --rc genhtml_legend=1 00:04:57.563 --rc geninfo_all_blocks=1 00:04:57.563 ' 00:04:57.563 13:03:54 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:57.563 --rc lcov_branch_coverage=1 00:04:57.563 --rc lcov_function_coverage=1 00:04:57.563 --rc genhtml_branch_coverage=1 00:04:57.563 --rc genhtml_function_coverage=1 00:04:57.563 --rc genhtml_legend=1 00:04:57.563 --rc geninfo_all_blocks=1 00:04:57.563 ' 00:04:57.563 13:03:54 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:57.563 --rc lcov_branch_coverage=1 00:04:57.563 --rc lcov_function_coverage=1 00:04:57.563 --rc genhtml_branch_coverage=1 00:04:57.563 --rc genhtml_function_coverage=1 00:04:57.563 --rc 
genhtml_legend=1 00:04:57.563 --rc geninfo_all_blocks=1 00:04:57.563 --no-external' 00:04:57.563 13:03:54 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:57.563 --rc lcov_branch_coverage=1 00:04:57.563 --rc lcov_function_coverage=1 00:04:57.563 --rc genhtml_branch_coverage=1 00:04:57.563 --rc genhtml_function_coverage=1 00:04:57.563 --rc genhtml_legend=1 00:04:57.563 --rc geninfo_all_blocks=1 00:04:57.563 --no-external' 00:04:57.563 13:03:54 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:57.563 lcov: LCOV version 1.14 00:04:57.563 13:03:54 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:15.663 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:15.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions 
found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:25.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:25.629 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:25.630 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:25.630 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:25.630 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:25.630 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:25.630 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:25.630 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:25.630 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:25.630 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:25.630 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:25.630 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:25.630 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:25.630 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:25.630 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:25.630 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:25.630 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:25.630 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:25.630 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:25.630 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:25.630 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:25.630 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:25.630 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:25.630 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:25.630 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:25.630 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:25.630 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:25.630 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:25.630 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:25.630 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:25.630 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:25.630 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:25.630 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:25.630 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:25.630 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:25.630 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:25.630 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:25.888 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:25.888 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:25.888 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:25.888 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:25.888 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:25.888 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:25.888 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:25.888 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:25.888 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:25.888 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:25.888 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:25.888 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:25.888 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:25.888 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:25.888 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:25.888 geninfo: WARNING: GCOV did not produce 
any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:25.888 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:25.888 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:25.888 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:25.888 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:25.888 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:25.888 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:25.888 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:25.888 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:25.888 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:25.888 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:25.888 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:25.888 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:25.888 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:25.888 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:25.888 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:25.888 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:25.888 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:25.888 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:25.888 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:25.888 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:25.888 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:25.888 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:30.084 13:04:25 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:30.084 13:04:25 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:30.084 13:04:25 -- common/autotest_common.sh@10 -- # set +x 00:05:30.084 13:04:25 -- spdk/autotest.sh@91 -- # rm -f 00:05:30.084 13:04:25 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:30.085 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:30.365 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:30.365 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:30.365 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:30.365 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:30.365 13:04:27 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:30.365 13:04:27 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:30.365 13:04:27 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:30.365 13:04:27 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:30.365 13:04:27 -- common/autotest_common.sh@1668 -- # for 
nvme in /sys/block/nvme* 00:05:30.365 13:04:27 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:30.365 13:04:27 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:30.365 13:04:27 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:30.365 13:04:27 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:30.365 13:04:27 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:30.365 13:04:27 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:05:30.365 13:04:27 -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:05:30.365 13:04:27 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:30.365 13:04:27 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:30.365 13:04:27 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:30.365 13:04:27 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:05:30.365 13:04:27 -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:05:30.365 13:04:27 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:30.365 13:04:27 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:30.366 13:04:27 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:30.366 13:04:27 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:05:30.366 13:04:27 -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:05:30.366 13:04:27 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:30.366 13:04:27 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:30.366 13:04:27 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:30.366 13:04:27 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:05:30.366 13:04:27 -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:05:30.366 13:04:27 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:30.366 13:04:27 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:30.366 13:04:27 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:30.366 13:04:27 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:05:30.366 13:04:27 -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:05:30.366 13:04:27 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:30.366 13:04:27 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:30.366 13:04:27 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:30.366 13:04:27 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:05:30.366 13:04:27 -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:05:30.366 13:04:27 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:30.366 13:04:27 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:30.366 13:04:27 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:30.366 13:04:27 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.366 13:04:27 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:30.366 13:04:27 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:30.366 13:04:27 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:30.366 13:04:27 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:30.623 No valid GPT data, bailing 00:05:30.623 13:04:27 -- 
scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:30.623 13:04:27 -- scripts/common.sh@391 -- # pt= 00:05:30.623 13:04:27 -- scripts/common.sh@392 -- # return 1 00:05:30.623 13:04:27 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:30.623 1+0 records in 00:05:30.623 1+0 records out 00:05:30.623 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141587 s, 74.1 MB/s 00:05:30.623 13:04:27 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.623 13:04:27 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:30.623 13:04:27 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:30.623 13:04:27 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:30.623 13:04:27 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:30.623 No valid GPT data, bailing 00:05:30.623 13:04:27 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:30.623 13:04:27 -- scripts/common.sh@391 -- # pt= 00:05:30.623 13:04:27 -- scripts/common.sh@392 -- # return 1 00:05:30.623 13:04:27 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:30.623 1+0 records in 00:05:30.623 1+0 records out 00:05:30.623 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00484587 s, 216 MB/s 00:05:30.623 13:04:27 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.623 13:04:27 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:30.623 13:04:27 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:05:30.623 13:04:27 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:05:30.623 13:04:27 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:30.623 No valid GPT data, bailing 00:05:30.623 13:04:27 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:30.881 13:04:27 -- scripts/common.sh@391 -- # pt= 00:05:30.881 13:04:27 -- scripts/common.sh@392 -- # return 1 00:05:30.881 13:04:27 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:30.881 1+0 records in 00:05:30.881 1+0 records out 00:05:30.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0049795 s, 211 MB/s 00:05:30.881 13:04:27 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.881 13:04:27 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:30.881 13:04:27 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:05:30.881 13:04:27 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:05:30.881 13:04:27 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:30.881 No valid GPT data, bailing 00:05:30.881 13:04:27 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:30.881 13:04:27 -- scripts/common.sh@391 -- # pt= 00:05:30.881 13:04:27 -- scripts/common.sh@392 -- # return 1 00:05:30.881 13:04:27 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:30.881 1+0 records in 00:05:30.881 1+0 records out 00:05:30.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00480015 s, 218 MB/s 00:05:30.881 13:04:27 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.881 13:04:27 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:30.881 13:04:27 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:05:30.881 13:04:27 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:05:30.881 13:04:27 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:30.881 No valid GPT data, bailing 00:05:30.881 
13:04:27 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:30.881 13:04:27 -- scripts/common.sh@391 -- # pt= 00:05:30.881 13:04:27 -- scripts/common.sh@392 -- # return 1 00:05:30.881 13:04:27 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:30.881 1+0 records in 00:05:30.881 1+0 records out 00:05:30.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00541035 s, 194 MB/s 00:05:30.881 13:04:27 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.881 13:04:27 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:30.881 13:04:27 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:05:30.881 13:04:27 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:05:30.881 13:04:27 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:30.881 No valid GPT data, bailing 00:05:30.881 13:04:27 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:30.881 13:04:27 -- scripts/common.sh@391 -- # pt= 00:05:30.881 13:04:27 -- scripts/common.sh@392 -- # return 1 00:05:30.881 13:04:27 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:30.881 1+0 records in 00:05:30.881 1+0 records out 00:05:30.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0044605 s, 235 MB/s 00:05:30.881 13:04:27 -- spdk/autotest.sh@118 -- # sync 00:05:31.139 13:04:27 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:31.139 13:04:27 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:31.139 13:04:27 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:33.036 13:04:29 -- spdk/autotest.sh@124 -- # uname -s 00:05:33.036 13:04:29 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:33.036 13:04:29 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:33.036 13:04:29 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:33.036 13:04:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.036 13:04:29 -- common/autotest_common.sh@10 -- # set +x 00:05:33.036 ************************************ 00:05:33.036 START TEST setup.sh 00:05:33.036 ************************************ 00:05:33.036 13:04:29 setup.sh -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:33.036 * Looking for test storage... 00:05:33.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:33.036 13:04:29 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:33.036 13:04:29 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:33.036 13:04:29 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:33.036 13:04:29 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:33.036 13:04:29 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.036 13:04:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:33.036 ************************************ 00:05:33.036 START TEST acl 00:05:33.036 ************************************ 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:33.036 * Looking for test storage... 
00:05:33.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:33.036 13:04:29 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 
00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:33.036 13:04:29 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:33.036 13:04:29 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:33.036 13:04:29 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:33.036 13:04:29 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:33.036 13:04:29 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:33.036 13:04:29 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:33.036 13:04:29 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:33.036 13:04:29 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:34.410 13:04:30 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:34.410 13:04:30 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:34.410 13:04:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:34.410 13:04:30 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:34.410 13:04:30 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:34.410 13:04:30 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:34.666 13:04:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:34.666 13:04:31 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:34.666 13:04:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:35.230 Hugepages 00:05:35.230 node hugesize free / total 00:05:35.230 13:04:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:35.230 13:04:31 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:35.230 13:04:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:35.230 00:05:35.230 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:35.230 13:04:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:35.230 13:04:31 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:35.230 13:04:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:35.230 13:04:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:35.230 13:04:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:35.230 13:04:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:35.230 13:04:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:35.488 13:04:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:35.488 13:04:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:35.488 13:04:32 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:35.488 13:04:32 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:35.488 13:04:32 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:35.488 13:04:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:35.488 13:04:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:05:35.488 13:04:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:35.488 13:04:32 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:35.488 13:04:32 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:35.488 13:04:32 setup.sh.acl -- 
setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:35.488 13:04:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:35.488 13:04:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:05:35.488 13:04:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:35.488 13:04:32 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:35.488 13:04:32 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:35.488 13:04:32 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:35.488 13:04:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:35.745 13:04:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:05:35.745 13:04:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:35.745 13:04:32 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:05:35.745 13:04:32 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:35.745 13:04:32 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:35.745 13:04:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:35.745 13:04:32 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:05:35.745 13:04:32 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:35.745 13:04:32 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:35.745 13:04:32 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.745 13:04:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:35.745 ************************************ 00:05:35.745 START TEST denied 00:05:35.745 ************************************ 00:05:35.745 13:04:32 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:05:35.745 13:04:32 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:35.745 13:04:32 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:35.745 13:04:32 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:35.745 13:04:32 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:35.745 13:04:32 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:37.116 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:37.116 13:04:33 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:37.116 13:04:33 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:37.116 13:04:33 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:37.116 13:04:33 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:37.116 13:04:33 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:37.116 13:04:33 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:37.116 13:04:33 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:37.116 13:04:33 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:37.116 13:04:33 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:37.116 13:04:33 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:43.696 00:05:43.696 real 0m7.253s 00:05:43.696 user 0m0.865s 00:05:43.696 sys 0m1.430s 00:05:43.696 13:04:39 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.696 
************************************ 00:05:43.696 13:04:39 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:43.696 END TEST denied 00:05:43.696 ************************************ 00:05:43.696 13:04:39 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:43.696 13:04:39 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:43.696 13:04:39 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.696 13:04:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:43.696 ************************************ 00:05:43.696 START TEST allowed 00:05:43.696 ************************************ 00:05:43.696 13:04:39 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:05:43.696 13:04:39 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:43.696 13:04:39 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:43.696 13:04:39 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:43.696 13:04:39 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:43.696 13:04:39 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:44.262 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.262 13:04:40 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:44.262 13:04:40 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:44.262 13:04:40 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:44.262 13:04:40 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:05:44.262 13:04:40 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:05:44.262 13:04:40 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:44.262 13:04:40 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:44.262 13:04:40 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:44.262 13:04:40 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:05:44.262 13:04:40 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:05:44.262 13:04:40 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:44.262 13:04:40 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:44.262 13:04:40 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:44.262 13:04:40 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:13.0 ]] 00:05:44.262 13:04:40 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:05:44.262 13:04:40 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:44.262 13:04:40 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:44.262 13:04:40 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:44.262 13:04:40 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:44.262 13:04:40 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:45.225 00:05:45.225 real 0m2.312s 00:05:45.225 user 0m1.005s 00:05:45.225 sys 0m1.284s 00:05:45.225 13:04:41 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.225 13:04:41 
setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:45.225 ************************************ 00:05:45.225 END TEST allowed 00:05:45.225 ************************************ 00:05:45.225 00:05:45.225 real 0m12.320s 00:05:45.225 user 0m3.175s 00:05:45.225 sys 0m4.164s 00:05:45.225 13:04:41 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.225 13:04:41 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:45.225 ************************************ 00:05:45.225 END TEST acl 00:05:45.225 ************************************ 00:05:45.485 13:04:41 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:45.485 13:04:41 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:45.485 13:04:41 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.485 13:04:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:45.485 ************************************ 00:05:45.485 START TEST hugepages 00:05:45.485 ************************************ 00:05:45.485 13:04:41 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:45.485 * Looking for test storage... 00:05:45.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 4709872 kB' 'MemAvailable: 7397212 kB' 'Buffers: 2436 kB' 'Cached: 2891308 kB' 'SwapCached: 0 kB' 'Active: 444084 kB' 'Inactive: 2551188 kB' 'Active(anon): 112044 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551188 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 103100 kB' 'Mapped: 48888 kB' 'Shmem: 10516 kB' 'KReclaimable: 82104 kB' 'Slab: 161396 kB' 'SReclaimable: 82104 kB' 'SUnreclaim: 79292 kB' 
'KernelStack: 6444 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 326656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.485 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
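The long run of continue lines here is the xtrace of get_meminfo (setup/common.sh) walking /proc/meminfo key by key until it reaches Hugepagesize and echoes 2048. A minimal stand-alone sketch of that lookup, assuming a plain while-read loop instead of the script's mapfile-based variant:

# sketch only: stand-alone equivalent of the lookup being traced; the real script caches
# /proc/meminfo into an array first, this version just streams the file
get_meminfo() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done </proc/meminfo
  return 1
}

get_meminfo Hugepagesize   # prints 2048 on this runner, matching the "echo 2048" later in the trace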
00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 
13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 
13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:45.486 13:04:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:45.487 
13:04:42 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:45.487 13:04:42 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:45.487 13:04:42 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:45.487 13:04:42 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:45.487 13:04:42 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.487 13:04:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:45.487 ************************************ 00:05:45.487 START TEST default_setup 00:05:45.487 ************************************ 00:05:45.487 13:04:42 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:05:45.487 13:04:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:45.487 13:04:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:45.487 13:04:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:45.487 13:04:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:45.487 13:04:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:45.487 13:04:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:45.487 13:04:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:45.487 13:04:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:45.487 13:04:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:45.487 13:04:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:45.487 13:04:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:45.487 13:04:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:45.487 13:04:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:45.487 13:04:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:45.487 13:04:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:45.487 13:04:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:45.487 13:04:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:45.487 13:04:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:45.487 13:04:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:45.487 13:04:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:45.487 13:04:42 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:45.487 13:04:42 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:46.053 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:46.620 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:46.620 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:46.620 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:46.882 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:46.882 13:04:43 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:46.882 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:46.882 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:46.882 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:46.882 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:46.882 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:46.882 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:46.882 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:46.882 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:46.882 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:46.882 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:46.882 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:46.882 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:46.882 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.882 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.882 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.882 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.882 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.882 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.882 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.882 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6813984 kB' 'MemAvailable: 9501124 kB' 'Buffers: 2436 kB' 'Cached: 2891292 kB' 'SwapCached: 0 kB' 'Active: 462092 kB' 'Inactive: 2551196 kB' 'Active(anon): 130052 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 121204 kB' 'Mapped: 49076 kB' 'Shmem: 10476 kB' 'KReclaimable: 81684 kB' 'Slab: 160652 kB' 'SReclaimable: 81684 kB' 'SUnreclaim: 78968 kB' 'KernelStack: 6512 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:46.882 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.882 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:05:46.882 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.882 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.882 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.883 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6813984 kB' 'MemAvailable: 9501124 kB' 'Buffers: 2436 kB' 'Cached: 2891292 kB' 'SwapCached: 0 kB' 'Active: 461776 kB' 'Inactive: 2551196 kB' 'Active(anon): 129736 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 120876 kB' 'Mapped: 49076 kB' 'Shmem: 10476 kB' 'KReclaimable: 81684 kB' 'Slab: 160644 kB' 'SReclaimable: 81684 kB' 'SUnreclaim: 78960 kB' 'KernelStack: 6464 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.884 13:04:43 
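[Editorial note] The snapshot printed above is internally consistent for hugepages: HugePages_Total: 1024 pages at Hugepagesize: 2048 kB gives 1024 * 2048 kB = 2,097,152 kB, which matches the Hugetlb: 2097152 kB line (only 2 MiB pages are configured here and there is no surplus). A small illustrative cross-check, not part of the test scripts:

    # Recompute the hugepage pool size from /proc/meminfo and compare with Hugetlb.
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    hugetlb_kb=$(awk '/^Hugetlb:/ {print $2}' /proc/meminfo)
    echo "hugepage pool: $(( total * size_kb )) kB (reported Hugetlb: ${hugetlb_kb} kB)"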
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.884 13:04:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.884 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.885 13:04:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.885 13:04:43 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6813984 kB' 'MemAvailable: 9501124 kB' 'Buffers: 2436 kB' 'Cached: 2891292 kB' 'SwapCached: 0 kB' 'Active: 461580 kB' 'Inactive: 2551196 kB' 'Active(anon): 129540 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 120656 kB' 'Mapped: 48888 kB' 'Shmem: 10476 kB' 'KReclaimable: 81684 kB' 'Slab: 160644 kB' 
'SReclaimable: 81684 kB' 'SUnreclaim: 78960 kB' 'KernelStack: 6448 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:46.885 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.886 13:04:43 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.886 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.887 13:04:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:46.887 nr_hugepages=1024 00:05:46.887 resv_hugepages=0 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:46.887 surplus_hugepages=0 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:46.887 anon_hugepages=0 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6813984 kB' 'MemAvailable: 9501124 kB' 'Buffers: 2436 kB' 'Cached: 2891292 kB' 'SwapCached: 0 kB' 'Active: 461580 kB' 'Inactive: 2551196 kB' 'Active(anon): 129540 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 120656 kB' 'Mapped: 48888 kB' 'Shmem: 10476 kB' 'KReclaimable: 81684 kB' 'Slab: 160644 kB' 'SReclaimable: 81684 kB' 'SUnreclaim: 78960 kB' 'KernelStack: 6448 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:46.887 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- 
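[Editorial note] In the entries above, once anon, surp and resv have all come back as 0, the test echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and then asserts at hugepages.sh@107 and @109 that the requested count matches what the kernel reports: (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )), before re-reading HugePages_Total. A hedged sketch of that consistency check (variable names mirror the log; the expected count 1024 comes from the test setup, not from this sketch):

    expected=1024
    nr_hugepages=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)

    if (( expected == nr_hugepages + surp + resv )) && (( expected == nr_hugepages )); then
        echo "hugepage accounting is consistent: ${nr_hugepages} pages, ${resv} reserved, ${surp} surplus"
    else
        echo "unexpected hugepage accounting" >&2
        exit 1
    fi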
setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.888 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.889 
13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.889 13:04:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # 
no_nodes=1 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6813984 kB' 'MemUsed: 5427996 kB' 'SwapCached: 0 kB' 'Active: 461848 kB' 'Inactive: 2551196 kB' 'Active(anon): 129808 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 2893728 kB' 'Mapped: 48888 kB' 'AnonPages: 120956 kB' 'Shmem: 10476 kB' 'KernelStack: 6480 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81684 kB' 'Slab: 160644 kB' 'SReclaimable: 81684 kB' 'SUnreclaim: 78960 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.889 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.890 13:04:43 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
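The xtrace above is the get_meminfo helper from setup/common.sh walking every "key: value" pair of /sys/devices/system/node/node0/meminfo until it reaches the requested field (HugePages_Total earlier, HugePages_Surp here), echoing its value and returning. Below is a condensed, hypothetical sketch of that flow; it assumes bash, swaps the script's mapfile/extglob prefix-stripping for a simple sed, and is meant only to illustrate the parsing pattern, not to reproduce the real helper verbatim.

get_meminfo() {
    # get: field name to look up; node: optional NUMA node number
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo
    # Per-node counters live under sysfs when NUMA topology is exposed.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Each line looks like "HugePages_Surp:    0"; per-node files prefix
    # every line with "Node N ", which is stripped before splitting on ': '.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

# Example usage mirroring the trace: surplus huge pages on node 0.
get_meminfo HugePages_Surp 0

The value echoed here (0 surplus pages on node 0) feeds the per-node accounting that follows, which ends with the "node0=1024 expecting 1024" check.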
00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:46.890 node0=1024 expecting 1024 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:46.890 00:05:46.890 real 0m1.431s 00:05:46.890 user 0m0.633s 00:05:46.890 sys 0m0.773s 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:46.890 13:04:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:46.890 ************************************ 00:05:46.890 END TEST default_setup 00:05:46.890 ************************************ 00:05:47.149 13:04:43 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:47.149 13:04:43 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:47.149 13:04:43 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.149 13:04:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:47.149 ************************************ 00:05:47.149 START TEST per_node_1G_alloc 00:05:47.149 ************************************ 00:05:47.149 13:04:43 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:05:47.149 13:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:47.149 13:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:47.149 13:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:47.149 13:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:47.149 13:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:47.149 13:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:47.149 13:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:47.149 13:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:47.149 13:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:47.149 13:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:47.149 13:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:47.149 13:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:47.149 13:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:47.149 13:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:47.149 13:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:47.149 13:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 
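At this point default_setup has completed with node0 reporting the expected 1024 pages, and the trace has moved into per_node_1G_alloc: get_test_nr_hugepages turns the requested 1048576 kB (1 GiB) into a page count using the 2048 kB huge page size reported in the meminfo dumps, i.e. 1048576 / 2048 = 512 pages, and assigns that count to the single node listed ("0"). A minimal sketch of that arithmetic, under the assumption of a 2048 kB Hugepagesize, might look like:

size_kb=1048576                                   # 1 GiB requested by the test
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 here
nr_hugepages=$(( size_kb / hugepagesize_kb ))     # 1048576 / 2048 = 512

# Only the nodes passed to the test receive an allocation; here just node 0.
nodes_test=()
for node in 0; do
    nodes_test[node]=$nr_hugepages
done
echo "NRHUGE=$nr_hugepages HUGENODE=0"            # how scripts/setup.sh is driven next

The setup.sh invocation that follows in the log then reserves those 512 pages on node 0 before verify_nr_hugepages re-reads the counters.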
00:05:47.149 13:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:47.149 13:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:47.149 13:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:47.149 13:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:47.149 13:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:47.149 13:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:47.149 13:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:47.149 13:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:47.149 13:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:47.407 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:47.670 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:47.670 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:47.670 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:47.670 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:47.670 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:47.670 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:47.670 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:47.670 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:47.670 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7854924 kB' 'MemAvailable: 10542084 kB' 'Buffers: 2436 kB' 'Cached: 2891296 kB' 'SwapCached: 0 kB' 'Active: 461788 kB' 'Inactive: 2551216 kB' 'Active(anon): 129748 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551216 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 121172 kB' 'Mapped: 49164 kB' 'Shmem: 10476 kB' 'KReclaimable: 81684 kB' 'Slab: 160684 kB' 'SReclaimable: 81684 kB' 'SUnreclaim: 79000 kB' 'KernelStack: 6592 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.671 13:04:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.671 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7855796 kB' 'MemAvailable: 10542956 kB' 'Buffers: 2436 kB' 'Cached: 2891296 kB' 'SwapCached: 0 kB' 'Active: 461576 kB' 'Inactive: 2551216 kB' 'Active(anon): 129536 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551216 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 120920 kB' 'Mapped: 48888 kB' 'Shmem: 10476 kB' 'KReclaimable: 81684 kB' 'Slab: 160652 kB' 'SReclaimable: 81684 kB' 'SUnreclaim: 78968 kB' 'KernelStack: 6464 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.672 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.673 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
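The repeating IFS=': ' / read -r var val _ / [[ key == pattern ]] / continue entries above and below are the xtrace of the get_meminfo helper in setup/common.sh: it loads /proc/meminfo (or a per-node meminfo file when a node id is supplied), then walks the "Key: value" pairs until the requested field matches and echoes its value. A minimal sketch of that loop, reconstructed from the traced commands only (the real setup/common.sh may differ in detail):

#!/usr/bin/env bash
# Minimal sketch of the traced get_meminfo helper; reconstructed from the
# xtrace in this log, not copied from setup/common.sh.
shopt -s extglob   # needed for the +([0-9]) pattern used below

get_meminfo() {
    local get=$1 node=${2:-}     # field name, optional NUMA node id
    local mem_f=/proc/meminfo
    local -a mem
    local var val _

    # When a node id is given and a per-node meminfo exists, read that instead
    # (the trace checks /sys/devices/system/node/node$node/meminfo).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node <id> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    # Walk "Key: value [kB]" pairs until the requested key matches, then print its value.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Example: number of reserved hugepages on the current system.
get_meminfo HugePages_Rsvd

The helper is invoked once per field here (AnonHugePages, HugePages_Surp, HugePages_Rsvd, HugePages_Total), which is why the full key list is scanned again for every query in the trace.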
00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7855292 kB' 'MemAvailable: 10542452 kB' 'Buffers: 2436 kB' 'Cached: 2891296 kB' 'SwapCached: 0 kB' 'Active: 461840 kB' 'Inactive: 2551216 kB' 'Active(anon): 129800 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551216 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 120920 kB' 'Mapped: 48888 kB' 'Shmem: 10476 kB' 'KReclaimable: 81684 kB' 'Slab: 160640 kB' 'SReclaimable: 81684 kB' 'SUnreclaim: 78956 kB' 'KernelStack: 6464 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.674 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
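The values these scans return feed the hugepage bookkeeping visible a little further down in the trace: resv=0 is recorded, nr_hugepages=512 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 are echoed, and the test asserts (( 512 == nr_hugepages + surp + resv )) and (( 512 == nr_hugepages )) before querying HugePages_Total once more. A minimal sketch of that consistency check, with names taken from the log (the function body itself is an assumption, not copied from hugepages.sh, and it reuses the get_meminfo sketch above):

# Hedged sketch of the bookkeeping around hugepages.sh@97-110 in this trace.
check_hugepage_counters() {
    local want=$1                               # page count the test configured (512 in this run)
    local anon surp resv nr_hugepages

    anon=$(get_meminfo AnonHugePages)           # 0 in this run
    surp=$(get_meminfo HugePages_Surp)          # 0
    resv=$(get_meminfo HugePages_Rsvd)          # 0
    nr_hugepages=$(get_meminfo HugePages_Total) # 512

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # Every configured page must be accounted for: no surplus, nothing reserved
    # (both comparisons reduce to 512 == 512 in this run).
    (( want == nr_hugepages + surp + resv )) || return 1
    (( want == nr_hugepages ))
}

With 512 pages at the reported Hugepagesize of 2048 kB, this matches the 'Hugetlb: 1048576 kB' figure shown in the meminfo dumps above (512 × 2048 kB = 1048576 kB).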
00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.675 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:47.676 nr_hugepages=512 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:47.676 resv_hugepages=0 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:47.676 surplus_hugepages=0 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:47.676 anon_hugepages=0 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7855292 kB' 'MemAvailable: 10542452 kB' 'Buffers: 2436 kB' 'Cached: 2891296 kB' 'SwapCached: 0 kB' 'Active: 461540 kB' 'Inactive: 2551216 kB' 'Active(anon): 129500 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551216 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 120880 kB' 'Mapped: 48888 kB' 'Shmem: 10476 kB' 'KReclaimable: 81684 kB' 'Slab: 160640 kB' 'SReclaimable: 81684 kB' 'SUnreclaim: 78956 kB' 'KernelStack: 6448 kB' 'PageTables: 4000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.676 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.677 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.678 
13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7855292 kB' 'MemUsed: 4386688 kB' 'SwapCached: 0 kB' 'Active: 461544 kB' 'Inactive: 2551216 kB' 'Active(anon): 129504 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551216 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'FilePages: 2893732 kB' 'Mapped: 48888 kB' 'AnonPages: 120900 kB' 'Shmem: 10476 kB' 'KernelStack: 6464 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81684 kB' 'Slab: 160636 kB' 'SReclaimable: 81684 kB' 'SUnreclaim: 78952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
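The stretch of trace above is setup/common.sh's get_meminfo walking a meminfo-style file one field at a time: with IFS set to ': ' it reads each line into var and val, hits 'continue' for every key that is not the one requested, and finally echoes the matching value (512 for HugePages_Total in this run). A minimal stand-alone sketch of that pattern, with a hypothetical helper name and reading the file directly rather than via the script's mapfile array:

  # Sketch of the field scan traced above (not the SPDK script itself).
  get_meminfo_sketch() {
      local get=$1 mem_f=${2:-/proc/meminfo}
      local var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # skip MemTotal, MemFree, ... until the key matches
          echo "$val"                        # e.g. 512 for HugePages_Total
          return 0
      done < "$mem_f"
      return 1
  }
  # get_meminfo_sketch HugePages_Total  -> 512 on this host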
00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.678 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.679 13:04:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.679 13:04:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:47.679 node0=512 expecting 512 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:47.679 00:05:47.679 real 0m0.698s 00:05:47.679 user 0m0.311s 00:05:47.679 sys 0m0.434s 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.679 13:04:44 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:47.679 ************************************ 00:05:47.679 END TEST per_node_1G_alloc 00:05:47.679 ************************************ 00:05:47.679 13:04:44 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:47.679 13:04:44 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:47.679 13:04:44 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.679 13:04:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:47.680 
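With the HugePages_Surp scan returning 0, per_node_1G_alloc finishes its verification: the global count already satisfied (( 512 == nr_hugepages + surp + resv )), each node's test value is bumped by that node's reserved and surplus pages, and the result is compared against the expectation echoed as 'node0=512 expecting 512'. A hedged sketch of that accounting with the numbers from this run (one node, 512 pages, no surplus or reserved):

  # Per-node verification arithmetic, using this run's values.
  nodes_test=( [0]=512 )   # node 0's hugepage count observed earlier in the trace
  resv=0                   # HugePages_Rsvd
  surp=0                   # HugePages_Surp (the scan above returned 0)
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))
      (( nodes_test[node] += surp ))
      echo "node${node}=${nodes_test[node]} expecting 512"
      (( nodes_test[node] == 512 )) && echo OK
  done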
************************************ 00:05:47.680 START TEST even_2G_alloc 00:05:47.680 ************************************ 00:05:47.680 13:04:44 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:05:47.680 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:47.680 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:47.680 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:47.680 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:47.680 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:47.680 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:47.680 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:47.680 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:47.680 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:47.680 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:47.680 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:47.680 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:47.680 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:47.680 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:47.680 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:47.680 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:47.680 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:47.680 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:47.680 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:47.680 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:47.680 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:47.680 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:47.680 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:47.680 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:48.245 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:48.246 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:48.246 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:48.246 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:48.246 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- 
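The even_2G_alloc test that starts here requests 2097152 kB worth of hugepages; with the 2048 kB Hugepagesize reported in the meminfo dumps that follow, get_test_nr_hugepages reduces that to nr_hugepages=1024, and the test re-runs scripts/setup.sh with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes. The sizing is plain division, sketched with this run's values (the kB unit is inferred from the meminfo output, not stated in the trace):

  # Worked sizing for even_2G_alloc: 2 GiB of 2048 kB hugepages.
  size_kb=2097152          # value passed to get_test_nr_hugepages
  hugepagesize_kb=2048     # 'Hugepagesize: 2048 kB' from the meminfo dump
  echo $(( size_kb / hugepagesize_kb ))   # 1024, matching nr_hugepages=1024 / NRHUGE=1024
  # followed (per the trace) by: NRHUGE=1024 HUGE_EVEN_ALLOC=yes .../spdk/scripts/setup.sh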
setup/hugepages.sh@91 -- # local sorted_s 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6805820 kB' 'MemAvailable: 9492980 kB' 'Buffers: 2436 kB' 'Cached: 2891296 kB' 'SwapCached: 0 kB' 'Active: 462412 kB' 'Inactive: 2551216 kB' 'Active(anon): 130372 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551216 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 121468 kB' 'Mapped: 49068 kB' 'Shmem: 10476 kB' 'KReclaimable: 81684 kB' 'Slab: 160616 kB' 'SReclaimable: 81684 kB' 'SUnreclaim: 78932 kB' 'KernelStack: 6472 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.246 
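verify_nr_hugepages begins with anonymous hugepages: the earlier test [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] matches what looks like the contents of the transparent-hugepage sysfs knob against "[never]", and since THP is not disabled the script goes on to read AnonHugePages from /proc/meminfo (0 kB in this run, so anon ends up 0). A sketch of that probe; the sysfs path is an assumption inferred from the 'always [madvise] never' string, not shown in the trace:

  # Sketch of the anonymous-hugepage probe.
  thp_state=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # assumed source of 'always [madvise] never'
  if [[ $thp_state != *"[never]"* ]]; then
      # THP not disabled, so count AnonHugePages from the global meminfo
      anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
  else
      anon_kb=0
  fi
  echo "anon=${anon_kb}"   # 0 here ('AnonHugePages: 0 kB')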
13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.246 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.247 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.510 13:04:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.510 13:04:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6805568 kB' 'MemAvailable: 9492728 kB' 'Buffers: 2436 kB' 
'Cached: 2891296 kB' 'SwapCached: 0 kB' 'Active: 461944 kB' 'Inactive: 2551216 kB' 'Active(anon): 129904 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551216 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 121000 kB' 'Mapped: 48888 kB' 'Shmem: 10476 kB' 'KReclaimable: 81684 kB' 'Slab: 160692 kB' 'SReclaimable: 81684 kB' 'SUnreclaim: 79008 kB' 'KernelStack: 6464 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.510 13:04:45 
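The /proc/meminfo snapshot printed just above already reflects the even allocation: HugePages_Total and HugePages_Free are both 1024 and Hugetlb is 2097152 kB, i.e. exactly the 2 GiB requested and still entirely free. The cross-check is a single multiplication:

  # Cross-check of the snapshot above.
  echo $(( 1024 * 2048 )) kB   # HugePages_Total * Hugepagesize = 2097152 kB = Hugetlb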
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.510 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 
13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.511 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6805568 kB' 'MemAvailable: 9492728 kB' 'Buffers: 2436 kB' 'Cached: 2891296 kB' 'SwapCached: 0 kB' 'Active: 461624 kB' 'Inactive: 2551216 kB' 'Active(anon): 129584 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551216 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 120724 kB' 'Mapped: 48888 kB' 'Shmem: 10476 kB' 'KReclaimable: 81684 kB' 'Slab: 160684 kB' 'SReclaimable: 81684 kB' 'SUnreclaim: 79000 kB' 'KernelStack: 6464 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 
kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.512 13:04:45 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.512 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.513 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # echo 0 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:48.514 nr_hugepages=1024 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:48.514 resv_hugepages=0 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:48.514 surplus_hugepages=0 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:48.514 anon_hugepages=0 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6805568 kB' 'MemAvailable: 9492728 kB' 'Buffers: 2436 kB' 'Cached: 2891296 kB' 'SwapCached: 0 kB' 'Active: 461892 kB' 'Inactive: 2551216 kB' 'Active(anon): 129852 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551216 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 120996 kB' 'Mapped: 48888 kB' 'Shmem: 10476 kB' 'KReclaimable: 81684 kB' 'Slab: 160676 kB' 'SReclaimable: 81684 kB' 'SUnreclaim: 78992 kB' 'KernelStack: 6480 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:48.514 13:04:45 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.514 13:04:45 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.514 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
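The loop traced above is the test's meminfo lookup: it reads "key: value" pairs one at a time with IFS=': ' and keeps issuing "continue" until the key matches the requested field (here HugePages_Total), then echoes the value. A minimal standalone sketch of that lookup pattern, assuming the same IFS-based parsing and per-node path layout visible in the trace; the function name and the "Node N" prefix handling below are illustrative simplifications, not the literal setup/common.sh code:

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # per-node lookups read that node's own meminfo, as the trace does for node0
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#Node "$node" }           # per-node files prefix each key with "Node N "
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then        # matched the requested field, e.g. HugePages_Total
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    echo 0    # field not present in this meminfo; treated as 0 (assumption for this sketch)
}

Example: surp=$(get_meminfo HugePages_Surp) yields 0 on this box, and get_meminfo HugePages_Total yields 1024, matching the values echoed in the trace.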
00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.515 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6805320 kB' 'MemUsed: 5436660 kB' 'SwapCached: 0 kB' 'Active: 461740 kB' 'Inactive: 2551216 kB' 'Active(anon): 129700 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551216 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'FilePages: 2893732 kB' 'Mapped: 48888 kB' 'AnonPages: 120796 kB' 'Shmem: 10476 kB' 'KernelStack: 6464 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81684 kB' 'Slab: 160672 kB' 'SReclaimable: 81684 kB' 'SUnreclaim: 78988 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.516 
13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.516 13:04:45 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.516 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- 
# sorted_t[nodes_test[node]]=1 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:48.517 node0=1024 expecting 1024 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:48.517 00:05:48.517 real 0m0.729s 00:05:48.517 user 0m0.310s 00:05:48.517 sys 0m0.437s 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.517 13:04:45 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:48.517 ************************************ 00:05:48.517 END TEST even_2G_alloc 00:05:48.517 ************************************ 00:05:48.517 13:04:45 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:48.517 13:04:45 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:48.517 13:04:45 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.517 13:04:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:48.517 ************************************ 00:05:48.517 START TEST odd_alloc 00:05:48.517 ************************************ 00:05:48.517 13:04:45 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:05:48.517 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:48.517 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:48.517 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:48.517 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:48.517 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:48.517 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:48.517 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:48.517 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:48.517 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:48.517 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:48.517 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:48.517 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:48.517 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:48.517 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:48.517 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:48.517 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:48.517 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:48.517 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:48.517 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:48.517 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:48.517 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
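The odd_alloc trace above only shows the inputs (size 2098176 kB, HUGEMEM=2049) and the result (nr_hugepages=1025); the rounding rule itself is not visible in the log. A minimal bash sketch of that size-to-page-count step, assuming a 2048 kB default hugepage size and ceiling division (both assumptions, not taken from the trace):

    # hypothetical sketch of the size -> page-count arithmetic; only the inputs and
    # the resulting 1025 appear in the trace, the rounding rule here is assumed
    size_kb=2098176
    default_hugepages_kb=2048   # Hugepagesize reported in the surrounding meminfo dumps
    nr_hugepages=$(( (size_kb + default_hugepages_kb - 1) / default_hugepages_kb ))
    echo "$nr_hugepages"        # -> 1025, the odd count the odd_alloc test requests
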
00:05:48.517 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:48.517 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:48.517 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:49.086 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:49.086 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:49.086 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:49.086 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:49.086 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:49.086 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:49.086 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:49.086 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:49.086 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:49.086 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:49.086 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:49.086 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:49.086 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:49.086 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:49.086 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6800280 kB' 'MemAvailable: 9487440 kB' 'Buffers: 2436 kB' 'Cached: 2891296 kB' 'SwapCached: 0 kB' 'Active: 462044 kB' 'Inactive: 2551216 kB' 'Active(anon): 130004 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551216 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121384 kB' 'Mapped: 49072 kB' 'Shmem: 10476 kB' 'KReclaimable: 81684 kB' 'Slab: 160620 kB' 'SReclaimable: 81684 kB' 'SUnreclaim: 78936 kB' 'KernelStack: 6440 kB' 'PageTables: 3904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 
13459992 kB' 'Committed_AS: 348588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.087 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.088 13:04:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.088 
13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6800280 kB' 'MemAvailable: 9487440 kB' 'Buffers: 2436 kB' 'Cached: 2891296 kB' 'SwapCached: 0 kB' 'Active: 461680 kB' 'Inactive: 2551216 kB' 'Active(anon): 129640 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551216 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 120732 kB' 'Mapped: 48888 kB' 'Shmem: 10476 kB' 'KReclaimable: 81684 kB' 'Slab: 160676 kB' 'SReclaimable: 81684 kB' 'SUnreclaim: 78992 kB' 'KernelStack: 6464 kB' 'PageTables: 4040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 348588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.088 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.089 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.090 
13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6800280 kB' 'MemAvailable: 9487440 kB' 'Buffers: 2436 kB' 'Cached: 2891296 kB' 'SwapCached: 0 kB' 'Active: 461668 kB' 'Inactive: 2551216 kB' 'Active(anon): 129628 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551216 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121012 kB' 'Mapped: 48888 kB' 'Shmem: 10476 kB' 'KReclaimable: 81684 kB' 'Slab: 160676 kB' 'SReclaimable: 81684 kB' 'SUnreclaim: 78992 kB' 'KernelStack: 6480 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 348588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.090 13:04:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.090 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.091 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.092 
13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:49.092 nr_hugepages=1025 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:49.092 resv_hugepages=0 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:49.092 surplus_hugepages=0 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:49.092 anon_hugepages=0 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:49.092 
13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6800280 kB' 'MemAvailable: 9487440 kB' 'Buffers: 2436 kB' 'Cached: 2891296 kB' 'SwapCached: 0 kB' 'Active: 461664 kB' 'Inactive: 2551216 kB' 'Active(anon): 129624 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551216 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121068 kB' 'Mapped: 48888 kB' 'Shmem: 10476 kB' 'KReclaimable: 81684 kB' 'Slab: 160672 kB' 'SReclaimable: 81684 kB' 'SUnreclaim: 78988 kB' 'KernelStack: 6480 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.092 13:04:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.092 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.353 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6800280 kB' 'MemUsed: 5441700 kB' 'SwapCached: 0 kB' 'Active: 461876 kB' 'Inactive: 2551216 kB' 'Active(anon): 129836 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551216 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 2893732 kB' 'Mapped: 48888 kB' 'AnonPages: 120740 kB' 'Shmem: 10476 kB' 'KernelStack: 6464 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81684 kB' 'Slab: 160652 kB' 'SReclaimable: 81684 kB' 'SUnreclaim: 78968 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.354 13:04:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.354 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
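(Editorial note.) Those per-key lookups feed the verification step visible above (hugepages.sh@110/@117): the odd_alloc test deliberately requested an odd page count, 1025, and the script confirms that HugePages_Total matches the request once surplus and reserved pages are accounted for, ending with "node0=1025 expecting 1025". The custom_alloc test that starts further down repeats the same pattern after get_test_nr_hugepages converts 1048576 kB into 1048576 / 2048 = 512 two-megabyte pages pinned to node 0 via HUGENODE='nodes_hp[0]=512'. A hedged sketch of the count check, reusing the illustrative helper above and assuming a single NUMA node:

    requested=1025                                     # deliberately odd: that is the point of odd_alloc
    total=$(get_meminfo_sketch HugePages_Total 0)      # 1025 in this run
    surp=$(get_meminfo_sketch HugePages_Surp 0)        # 0
    resv=$(get_meminfo_sketch HugePages_Rsvd)          # 0; Rsvd is only reported globally in /proc/meminfo
    if (( total == requested + surp + resv )); then
        echo "node0=$total expecting $requested"
    else
        echo "hugepage count mismatch on node0" >&2
    fi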
00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:49.355 node0=1025 expecting 1025 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:49.355 00:05:49.355 real 0m0.699s 00:05:49.355 user 0m0.324s 00:05:49.355 sys 0m0.428s 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.355 13:04:45 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:49.355 ************************************ 00:05:49.355 END TEST odd_alloc 00:05:49.355 ************************************ 00:05:49.355 13:04:45 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:49.355 13:04:45 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:49.355 13:04:45 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.355 13:04:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:49.355 ************************************ 00:05:49.355 START TEST custom_alloc 00:05:49.355 ************************************ 00:05:49.355 13:04:45 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:05:49.355 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:49.355 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:49.355 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:49.355 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:49.355 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:49.355 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:49.355 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:49.355 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:49.355 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:49.355 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:49.355 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:49.355 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:49.355 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:49.355 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:49.355 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- 
# local _no_nodes=1 00:05:49.355 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:49.355 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:49.355 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:49.356 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:49.356 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:49.356 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:49.356 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:49.356 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:49.356 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:49.356 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:49.356 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:49.356 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:49.356 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:49.356 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:49.356 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:49.356 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:49.356 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:49.356 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:49.356 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:49.356 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:49.356 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:49.356 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:49.356 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:49.356 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:49.356 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:49.356 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:49.356 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:49.356 13:04:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:49.356 13:04:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:49.356 13:04:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:49.613 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:49.992 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:49.992 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:49.992 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:49.992 0000:00:13.0 
(1b36 0010): Already using the uio_pci_generic driver 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7854832 kB' 'MemAvailable: 10541992 kB' 'Buffers: 2436 kB' 'Cached: 2891296 kB' 'SwapCached: 0 kB' 'Active: 462628 kB' 'Inactive: 2551216 kB' 'Active(anon): 130588 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551216 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121696 kB' 'Mapped: 48956 kB' 'Shmem: 10476 kB' 'KReclaimable: 81684 kB' 'Slab: 160636 kB' 'SReclaimable: 81684 kB' 'SUnreclaim: 78952 kB' 'KernelStack: 6480 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:49.992 
13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.992 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 
13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7855288 kB' 'MemAvailable: 10542448 kB' 'Buffers: 2436 kB' 'Cached: 2891296 kB' 'SwapCached: 0 kB' 'Active: 461876 kB' 'Inactive: 2551216 kB' 'Active(anon): 129836 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551216 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 120952 kB' 'Mapped: 49072 kB' 'Shmem: 10476 kB' 'KReclaimable: 81684 kB' 'Slab: 160628 kB' 'SReclaimable: 81684 kB' 'SUnreclaim: 78944 kB' 'KernelStack: 6464 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.993 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 
13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7855288 kB' 'MemAvailable: 10542448 kB' 'Buffers: 2436 kB' 'Cached: 2891296 kB' 'SwapCached: 0 kB' 'Active: 461848 kB' 'Inactive: 2551216 kB' 'Active(anon): 129808 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551216 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121184 kB' 'Mapped: 49072 kB' 'Shmem: 10476 kB' 'KReclaimable: 81684 kB' 'Slab: 160624 kB' 'SReclaimable: 81684 kB' 'SUnreclaim: 78940 kB' 'KernelStack: 6448 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 
348588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.994 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:49.995 13:04:46 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:49.995 nr_hugepages=512 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:49.995 resv_hugepages=0 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:49.995 surplus_hugepages=0 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:49.995 anon_hugepages=0 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7855288 kB' 'MemAvailable: 10542448 kB' 'Buffers: 2436 kB' 'Cached: 2891296 kB' 'SwapCached: 0 kB' 'Active: 461808 kB' 'Inactive: 2551216 kB' 'Active(anon): 129768 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551216 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121148 kB' 'Mapped: 49072 kB' 'Shmem: 10476 kB' 'KReclaimable: 81684 kB' 'Slab: 160624 kB' 'SReclaimable: 81684 kB' 'SUnreclaim: 78940 kB' 'KernelStack: 6432 kB' 'PageTables: 3916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.995 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:49.996 13:04:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7855288 kB' 'MemUsed: 4386692 kB' 'SwapCached: 0 kB' 'Active: 461796 kB' 'Inactive: 2551216 kB' 'Active(anon): 129756 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551216 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 2893732 kB' 'Mapped: 49072 kB' 'AnonPages: 121132 kB' 'Shmem: 10476 kB' 'KernelStack: 6500 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81684 kB' 'Slab: 160624 kB' 'SReclaimable: 81684 kB' 'SUnreclaim: 78940 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.996 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:49.997 node0=512 expecting 512 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:49.997 00:05:49.997 real 0m0.698s 00:05:49.997 user 0m0.320s 00:05:49.997 sys 0m0.427s 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.997 13:04:46 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:49.997 ************************************ 00:05:49.997 END TEST custom_alloc 00:05:49.997 ************************************ 00:05:49.997 13:04:46 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:49.997 13:04:46 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:49.997 13:04:46 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.997 13:04:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:49.997 ************************************ 00:05:49.997 START TEST no_shrink_alloc 00:05:49.997 ************************************ 00:05:49.997 13:04:46 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:05:49.997 13:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:49.997 13:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:49.997 13:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:49.997 13:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:49.997 13:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:49.997 13:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:49.997 13:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:49.997 13:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:49.997 13:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:49.997 13:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:49.997 13:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:49.997 13:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:49.997 13:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:49.997 13:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:49.997 13:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:49.997 13:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:49.997 13:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:49.997 13:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:49.997 13:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:49.997 13:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:49.997 13:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:49.997 13:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:50.563 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:50.563 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:50.563 0000:00:10.0 (1b36 0010): Already using the 
uio_pci_generic driver 00:05:50.563 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:50.563 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6809748 kB' 'MemAvailable: 9496908 kB' 'Buffers: 2436 kB' 'Cached: 2891300 kB' 'SwapCached: 0 kB' 'Active: 459092 kB' 'Inactive: 2551220 kB' 'Active(anon): 127052 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118256 kB' 'Mapped: 48268 kB' 'Shmem: 10476 kB' 'KReclaimable: 81680 kB' 'Slab: 160476 kB' 'SReclaimable: 81680 kB' 'SUnreclaim: 78796 kB' 'KernelStack: 6420 kB' 'PageTables: 3880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:50.563 13:04:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
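A note on the notation that dominates this block: keys such as AnonHugePages and HugePages_Surp appear as \A\n\o\n\H\u\g\e\P\a\g\e\s and \H\u\g\e\P\a\g\e\s\_\S\u\r\p because bash's xtrace escapes every character of a quoted right-hand side of == inside [[ ]] to show it is matched literally rather than as a glob. The same effect can be reproduced in any shell session, roughly:

    $ set -x
    $ get=AnonHugePages
    $ [[ MemTotal == "$get" ]]
    + [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]

So each IFS=': ' / read -r var val _ / [[ ... ]] / continue group above and below is simply one iteration of the meminfo-parsing loop moving on to the next key.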
00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 
13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.563 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
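For context on the test traced back at setup/hugepages.sh@96: "always [madvise] never" is the usual contents of /sys/kernel/mm/transparent_hugepage/enabled, with the bracketed word marking the active THP mode, and the comparison against *[never]* appears to gate the AnonHugePages read at @97 (which lands at anon=0 later in this trace) on transparent huge pages not being disabled outright. A stand-alone sketch of that gate, with the sysfs path and variable names assumed for illustration rather than copied from the SPDK scripts:

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)  # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP not disabled, so anonymous huge pages may be in use; read their total in kB
        anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    else
        anon_kb=0
    fi
    echo "anon_kb=${anon_kb:-0}"   # 0 on this host, matching anon=0 further down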
00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 
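The helper whose xtrace fills most of this block, get_meminfo in setup/common.sh, has just returned 0 for AnonHugePages (the echo 0 / return 0 above, which is where anon=0 comes from) and is now being re-entered for HugePages_Surp. Its visible logic is simply: pick /proc/meminfo (or a per-NUMA-node meminfo file when a node argument is given), split each line on ': ', and print the value of the first key that matches the request. A minimal sketch of that pattern, with an illustrative function name and the per-node "Node <n> " prefix stripping seen in the trace left out:

    get_meminfo_value() {
        local get=$1 var val _
        # Walk /proc/meminfo one "Key:   value unit" line at a time
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"    # value only, e.g. a kB figure or a page count
                return 0
            fi
        done < /proc/meminfo
    }

    get_meminfo_value AnonHugePages   # prints 0 on this host
    get_meminfo_value HugePages_Surp  # prints 0 on this host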
00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6809792 kB' 'MemAvailable: 9496952 kB' 'Buffers: 2436 kB' 'Cached: 2891300 kB' 'SwapCached: 0 kB' 'Active: 458992 kB' 'Inactive: 2551220 kB' 'Active(anon): 126952 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118156 kB' 'Mapped: 48528 kB' 'Shmem: 10476 kB' 'KReclaimable: 81680 kB' 'Slab: 160472 kB' 'SReclaimable: 81680 kB' 'SUnreclaim: 78792 kB' 'KernelStack: 6388 kB' 'PageTables: 3784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 338816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.564 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
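At this point the verification pass has anon=0 and surp=0 and is starting a third lookup for HugePages_Rsvd; together with the HugePages_Total / HugePages_Free pair reported in the meminfo snapshots above (1024 / 1024 here), these are the counters the 1024-page request is presumably checked against. Surplus pages are ones the kernel allocated beyond nr_hugepages via overcommit, and reserved pages are committed to mappings but not yet faulted in. The same counters could be pulled in one pass with plain awk instead of repeated get_meminfo calls, for example:

    awk '/^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)):/ {print $1, $2}' /proc/meminfo
    # On this host the snapshot above reports:
    #   AnonHugePages: 0 kB, HugePages_Total: 1024, HugePages_Free: 1024,
    #   HugePages_Rsvd: 0, HugePages_Surp: 0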
00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6809792 kB' 'MemAvailable: 9496948 kB' 'Buffers: 2436 kB' 'Cached: 2891296 kB' 'SwapCached: 0 kB' 'Active: 458980 kB' 'Inactive: 2551216 kB' 'Active(anon): 126940 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551216 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118104 kB' 'Mapped: 48152 kB' 'Shmem: 10476 kB' 'KReclaimable: 81680 kB' 'Slab: 160464 kB' 'SReclaimable: 81680 kB' 'SUnreclaim: 78784 kB' 'KernelStack: 6368 kB' 'PageTables: 3672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:50.565 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:50.826 nr_hugepages=1024 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:50.826 resv_hugepages=0 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:50.826 surplus_hugepages=0 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:50.826 anon_hugepages=0 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:50.826 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
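At this point hugepages.sh has surp=0 and resv=0 and checks that the kernel's HugePages_Total equals nr_hugepages + surp + resv (1024 == 1024 + 0 + 0). When get_meminfo is called without a node argument, the per-node path /sys/devices/system/node/node/meminfo does not exist, so the script keeps /proc/meminfo, exactly as the [[ -e ... ]] test above shows. A hedged sketch of that file selection and consistency check (helper name and variables are illustrative, not the script's own):

    #!/usr/bin/env bash
    # pick_meminfo_file [NODE]: choose the per-node meminfo if it exists,
    # otherwise fall back to the global /proc/meminfo.
    pick_meminfo_file() {
        local node=$1 mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        echo "$mem_f"
    }

    nr_hugepages=1024 surp=0 resv=0
    total=$(awk '/HugePages_Total:/ {print $NF}' "$(pick_meminfo_file "")")
    (( total == nr_hugepages + surp + resv )) && echo "nr_hugepages=$nr_hugepages verified"
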
00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6809540 kB' 'MemAvailable: 9496704 kB' 'Buffers: 2436 kB' 'Cached: 2891304 kB' 'SwapCached: 0 kB' 'Active: 458540 kB' 'Inactive: 2551224 kB' 'Active(anon): 126500 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 117668 kB' 'Mapped: 48152 kB' 'Shmem: 10476 kB' 'KReclaimable: 81680 kB' 'Slab: 160456 kB' 'SReclaimable: 81680 kB' 'SUnreclaim: 78776 kB' 'KernelStack: 6400 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
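Further down in the trace the same scan is repeated against the per-node file /sys/devices/system/node/node0/meminfo, whose lines carry a 'Node 0 ' prefix; the mapfile/prefix-strip pair visible in the trace removes that prefix so the same key matching works for both sources. A short illustration of that step (assumes extglob, Linux sysfs, and an existing node0 directory):

    #!/usr/bin/env bash
    shopt -s extglob
    # Per-node meminfo lines look like "Node 0 HugePages_Surp:  0";
    # strip the "Node <n> " prefix the same way the traced script does.
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}" | grep '^HugePages_Surp:'
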
00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.827 13:04:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.827 13:04:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.827 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.828 13:04:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.828 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6809540 kB' 'MemUsed: 5432440 kB' 'SwapCached: 0 kB' 'Active: 458520 kB' 'Inactive: 2551224 kB' 'Active(anon): 126480 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 2893740 kB' 'Mapped: 48152 kB' 'AnonPages: 117904 kB' 'Shmem: 10476 kB' 'KernelStack: 6400 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81680 kB' 'Slab: 160456 kB' 'SReclaimable: 81680 kB' 'SUnreclaim: 78776 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.829 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.830 13:04:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:50.830 node0=1024 expecting 1024 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:50.830 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:51.088 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:51.351 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:51.351 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:51.351 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:51.351 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:51.351 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6806376 kB' 'MemAvailable: 9493540 kB' 'Buffers: 2436 kB' 'Cached: 2891304 kB' 'SwapCached: 0 kB' 'Active: 458956 kB' 'Inactive: 2551224 kB' 'Active(anon): 126916 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118368 kB' 'Mapped: 48860 kB' 'Shmem: 10476 kB' 'KReclaimable: 81680 kB' 'Slab: 160448 kB' 'SReclaimable: 81680 kB' 'SUnreclaim: 78768 kB' 'KernelStack: 6500 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.351 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Surp 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6806124 kB' 'MemAvailable: 9493288 kB' 'Buffers: 2436 kB' 'Cached: 2891304 kB' 'SwapCached: 0 kB' 'Active: 458752 kB' 'Inactive: 2551224 kB' 'Active(anon): 126712 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 117648 kB' 'Mapped: 48152 kB' 'Shmem: 10476 kB' 'KReclaimable: 81680 kB' 'Slab: 160440 kB' 'SReclaimable: 81680 kB' 'SUnreclaim: 78760 kB' 'KernelStack: 6400 kB' 'PageTables: 3748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.352 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 
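The repeated "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" pairs in this trace are bash xtrace output from the get_meminfo helper in setup/common.sh: it snapshots /proc/meminfo (or a per-node meminfo file when a node is given) with mapfile, strips any "Node N " prefix, then walks the snapshot with IFS=': ' read -r var val _ until the requested key matches, and echoes its value. The anon=0 and surp=0 assignments above come back through exactly this path. Below is a minimal sketch of the same lookup pattern, assuming plain bash with extglob and the standard /proc and sysfs paths; it is an illustration only, not the SPDK script itself.

shopt -s extglob

get_meminfo_sketch() {
    # usage: get_meminfo_sketch <key> [numa-node], e.g. get_meminfo_sketch HugePages_Rsvd 0
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local line var val _
    while read -r line; do
        line=${line#Node +([0-9]) }                # per-node files prefix every line with "Node N "
        IFS=': ' read -r var val _ <<< "$line"     # val gets the number, _ swallows a trailing "kB"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1                                       # key not present in this meminfo file
}

With hugepages untouched on this runner, a call such as get_meminfo_sketch HugePages_Rsvd would print 0, while HugePages_Total and HugePages_Free stay at 1024 -- which is what the earlier "node0=1024 expecting 1024" check relies on.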
00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6806124 kB' 'MemAvailable: 9493288 kB' 'Buffers: 2436 kB' 'Cached: 2891304 kB' 'SwapCached: 0 kB' 'Active: 458744 kB' 'Inactive: 2551224 kB' 'Active(anon): 126704 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 117896 kB' 'Mapped: 48152 kB' 'Shmem: 10476 kB' 'KReclaimable: 81680 kB' 'Slab: 160440 kB' 'SReclaimable: 81680 kB' 'SUnreclaim: 78760 kB' 'KernelStack: 6400 kB' 'PageTables: 3748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.353 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:51.354 nr_hugepages=1024 00:05:51.354 resv_hugepages=0 00:05:51.354 surplus_hugepages=0 00:05:51.354 anon_hugepages=0 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:51.354 13:04:48 
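
The loop traced here is the meminfo scanner from setup/common.sh: it splits each line of the chosen meminfo file on ': ', skips every key that is not the one requested, and echoes the value of the first match (HugePages_Rsvd above, which came back 0). A minimal stand-alone sketch of that pattern follows; it is not the SPDK helper itself, and the function name is illustrative.

get_meminfo_value() {
    # Scan a meminfo-style file for one key and print its value.
    # $1 = key (e.g. HugePages_Rsvd), $2 = file (defaults to /proc/meminfo).
    local key=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    return 1
}

# Against the values printed in this run:
#   get_meminfo_value HugePages_Rsvd   -> 0
#   get_meminfo_value HugePages_Total  -> 1024

When the real helper is pointed at a per-node file it also strips the leading 'Node <N> ' prefix first, which is what the mem=("${mem[@]#Node +([0-9]) }") step in the trace does.
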
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:51.354 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6806124 kB' 'MemAvailable: 9493288 kB' 'Buffers: 2436 kB' 'Cached: 2891304 kB' 'SwapCached: 0 kB' 'Active: 458716 kB' 'Inactive: 2551224 kB' 'Active(anon): 126676 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 117816 kB' 'Mapped: 48152 kB' 'Shmem: 10476 kB' 'KReclaimable: 81680 kB' 'Slab: 160440 kB' 'SReclaimable: 81680 kB' 'SUnreclaim: 78760 kB' 'KernelStack: 6384 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 
13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.355 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:51.356 
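
At this point hugepages.sh has confirmed the global count (1024 == nr_hugepages + surplus + reserved), and get_nodes has globbed /sys/devices/system/node/node<N> to find the single node on this VM; it now re-runs the same scanner against node0's meminfo for HugePages_Surp. The sketch below shows an equivalent per-node check, but it reads the counter from the per-node hugepages sysfs directory rather than re-parsing meminfo, so the hugepages-2048kB/nr_hugepages path is an assumption (standard for 2 MiB pages) and not what the script itself reads.

# Per-node hugepage count via sysfs; assumes 2 MiB pages are in use.
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    nr=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages" 2>/dev/null || echo 0)
    echo "node${node}: ${nr} hugepages"
done
# On this single-node VM the loop prints: node0: 1024 hugepages
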
13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6806124 kB' 'MemUsed: 5435856 kB' 'SwapCached: 0 kB' 'Active: 458712 kB' 'Inactive: 2551224 kB' 'Active(anon): 126672 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2551224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 2893740 kB' 'Mapped: 48152 kB' 'AnonPages: 117812 kB' 'Shmem: 10476 kB' 'KernelStack: 6384 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81680 kB' 'Slab: 160440 kB' 'SReclaimable: 81680 kB' 'SUnreclaim: 78760 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.356 13:04:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.356 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.613 13:04:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:51.613 node0=1024 expecting 1024 00:05:51.613 ************************************ 00:05:51.613 END TEST no_shrink_alloc 00:05:51.613 ************************************ 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:51.613 00:05:51.613 real 0m1.431s 00:05:51.613 user 0m0.678s 00:05:51.613 sys 0m0.807s 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- 
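
The checks no_shrink_alloc just passed reduce to two small assertions, written out here with the values from this run (1024 pre-allocated pages, no surplus or reserved pages, all of them on node 0); this is only a restatement of the traced arithmetic, not extra test logic.

nr_hugepages=1024 resv_hugepages=0 surplus_hugepages=0
(( 1024 == nr_hugepages + surplus_hugepages + resv_hugepages )) && echo "global count consistent"
[[ 1024 == 1024 ]] && echo "node0=1024 expecting 1024"
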
common/autotest_common.sh@1122 -- # xtrace_disable 00:05:51.613 13:04:48 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:51.613 13:04:48 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:51.613 13:04:48 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:51.613 13:04:48 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:51.613 13:04:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:51.613 13:04:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:51.613 13:04:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:51.613 13:04:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:51.613 13:04:48 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:51.613 13:04:48 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:51.613 00:05:51.613 real 0m6.150s 00:05:51.613 user 0m2.752s 00:05:51.613 sys 0m3.575s 00:05:51.613 ************************************ 00:05:51.613 END TEST hugepages 00:05:51.613 ************************************ 00:05:51.613 13:04:48 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:51.613 13:04:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:51.613 13:04:48 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:51.613 13:04:48 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:51.613 13:04:48 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:51.613 13:04:48 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:51.613 ************************************ 00:05:51.613 START TEST driver 00:05:51.613 ************************************ 00:05:51.613 13:04:48 setup.sh.driver -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:51.613 * Looking for test storage... 
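
Before handing off to the driver suite, clear_hp walks every hugepages-* directory under each node and echoes 0 into it to release the pages, then exports CLEAR_HUGE=yes so later setup.sh runs keep doing the same. A hedged sketch of that teardown; the trace only shows the directory glob and the echo, so the nr_hugepages target file is assumed (it is the standard sysfs counter), and the loop needs root.

# Teardown pattern visible in the trace (sketch, not the script verbatim).
for node_dir in /sys/devices/system/node/node[0-9]*; do
    for hp_dir in "$node_dir"/hugepages/hugepages-*; do
        [[ -d $hp_dir ]] || continue
        echo 0 > "$hp_dir/nr_hugepages"   # assumed target file
    done
done
export CLEAR_HUGE=yes
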
00:05:51.613 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:51.613 13:04:48 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:51.613 13:04:48 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:51.613 13:04:48 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:58.239 13:04:54 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:58.239 13:04:54 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:58.239 13:04:54 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.239 13:04:54 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:58.239 ************************************ 00:05:58.239 START TEST guess_driver 00:05:58.239 ************************************ 00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:58.239 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:58.239 Looking for driver=uio_pci_generic 00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 
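The guess_driver run above reduces to a simple preference order: bind to vfio-pci when the host exposes IOMMU groups (or unsafe no-IOMMU mode is enabled), otherwise fall back to uio_pci_generic if modprobe can resolve its module. A minimal stand-alone sketch of that decision follows; it mirrors the checks visible in the trace but is not the literal setup/driver.sh code.

  #!/usr/bin/env bash
  # Sketch of the driver choice traced above (sysfs paths as seen on the test VM).
  shopt -s nullglob
  pick_driver() {
      local groups=(/sys/kernel/iommu_groups/*)
      local unsafe=""
      [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
          unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      # vfio-pci is usable when IOMMU groups exist or unsafe no-IOMMU mode is on
      if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
          echo vfio-pci
          return 0
      fi
      # otherwise accept uio_pci_generic if modprobe resolves it to a kernel module
      if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
          echo uio_pci_generic
          return 0
      fi
      echo 'No valid driver found' >&2
      return 1
  }
  pick_driver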
00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:58.239 13:04:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:58.812 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:58.812 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:58.812 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:58.812 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:58.812 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:58.812 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:58.812 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:58.812 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:58.812 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:58.812 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:58.812 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:58.812 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:58.812 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:58.812 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:58.812 13:04:55 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:58.812 13:04:55 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:05.372 00:06:05.372 real 0m7.214s 00:06:05.372 user 0m0.841s 00:06:05.372 sys 0m1.457s 00:06:05.372 13:05:01 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.372 ************************************ 00:06:05.372 END TEST guess_driver 00:06:05.372 ************************************ 00:06:05.372 13:05:01 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:06:05.372 ************************************ 00:06:05.372 END TEST driver 00:06:05.372 ************************************ 00:06:05.372 00:06:05.372 real 0m13.258s 00:06:05.372 user 0m1.200s 00:06:05.372 sys 0m2.239s 00:06:05.372 13:05:01 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.372 13:05:01 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:05.372 13:05:01 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:05.372 13:05:01 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:05.372 13:05:01 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.372 13:05:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:05.372 ************************************ 00:06:05.372 START TEST devices 00:06:05.372 
************************************ 00:06:05.372 13:05:01 setup.sh.devices -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:05.372 * Looking for test storage... 00:06:05.372 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:05.372 13:05:01 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:05.372 13:05:01 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:06:05.372 13:05:01 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:05.372 13:05:01 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:06.310 13:05:02 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:06.310 
13:05:02 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:06.310 13:05:02 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:06.310 13:05:02 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:06:06.310 13:05:02 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:06:06.310 13:05:02 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:06:06.310 13:05:02 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:06:06.310 13:05:02 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:06:06.310 13:05:02 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:06.310 13:05:02 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:06:06.310 13:05:02 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:06.310 13:05:02 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:06:06.310 13:05:02 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:06.310 13:05:02 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:06:06.310 13:05:02 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:06:06.310 13:05:02 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:06:06.310 No valid GPT data, bailing 00:06:06.310 13:05:02 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:06.310 13:05:02 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:06.310 13:05:02 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:06.310 13:05:02 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:06:06.310 13:05:02 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:06.310 13:05:02 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:06.310 13:05:02 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:06:06.310 13:05:02 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:06:06.310 13:05:02 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:06.310 13:05:02 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:06:06.310 13:05:02 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:06.310 13:05:02 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:06:06.310 13:05:02 setup.sh.devices -- setup/devices.sh@201 -- # 
ctrl=nvme1 00:06:06.310 13:05:02 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:06:06.310 13:05:02 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:06:06.310 13:05:02 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:06:06.310 13:05:02 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:06:06.310 13:05:02 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:06:06.310 No valid GPT data, bailing 00:06:06.310 13:05:02 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:06.310 13:05:02 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:06.310 13:05:02 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:06.310 13:05:02 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:06:06.310 13:05:02 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:06:06.310 13:05:02 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:06:06.310 13:05:02 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:06:06.311 13:05:02 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:06:06.311 13:05:02 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:06.311 13:05:02 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:06:06.311 13:05:02 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:06.311 13:05:02 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:06:06.311 13:05:02 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:06:06.311 13:05:02 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:06:06.311 13:05:02 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:06:06.311 13:05:02 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:06:06.311 13:05:02 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:06:06.311 13:05:02 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:06:06.311 No valid GPT data, bailing 00:06:06.311 13:05:03 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:06:06.311 13:05:03 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:06.311 13:05:03 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:06.311 13:05:03 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:06:06.311 13:05:03 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:06:06.311 13:05:03 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:06:06.311 13:05:03 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:06.311 13:05:03 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:06.311 13:05:03 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:06.311 13:05:03 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:06:06.311 13:05:03 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:06.311 13:05:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:06:06.311 13:05:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:06:06.311 13:05:03 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:06:06.311 
13:05:03 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:06:06.311 13:05:03 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:06:06.311 13:05:03 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:06:06.311 13:05:03 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:06:06.570 No valid GPT data, bailing 00:06:06.570 13:05:03 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:06:06.570 13:05:03 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:06.570 13:05:03 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:06.570 13:05:03 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:06:06.570 13:05:03 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:06:06.570 13:05:03 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:06:06.570 13:05:03 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:06.570 13:05:03 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:06.570 13:05:03 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:06.570 13:05:03 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:06:06.570 13:05:03 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:06.570 13:05:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:06:06.570 13:05:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:06:06.570 13:05:03 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:06:06.570 13:05:03 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:06:06.570 13:05:03 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:06:06.570 13:05:03 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:06:06.570 13:05:03 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:06:06.570 No valid GPT data, bailing 00:06:06.570 13:05:03 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:06:06.570 13:05:03 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:06.570 13:05:03 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:06.570 13:05:03 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:06:06.570 13:05:03 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:06:06.570 13:05:03 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:06:06.570 13:05:03 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:06.570 13:05:03 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:06.570 13:05:03 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:06.570 13:05:03 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:06:06.570 13:05:03 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:06.570 13:05:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:06:06.570 13:05:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:06:06.570 13:05:03 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:06:06.570 13:05:03 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:06:06.570 
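Every namespace probed above has to clear the same three hurdles before it is counted: it must not be zoned, must not already carry a partition table, and it must be at least min_disk_size (3221225472 bytes, i.e. 3 GiB). A hedged stand-alone version of that filter is sketched below; the spdk-gpt.py probe from the trace is replaced by a plain blkid call, the size is derived from the sysfs sector count, and the PCI lookup via sysfs links is an assumption (the test script reads it from setup.sh status output instead).

  #!/usr/bin/env bash
  # Sketch of the block-device qualification traced above; not the literal devices.sh code.
  shopt -s extglob nullglob
  min_disk_size=3221225472                          # 3 GiB, same threshold as the test
  for block in /sys/block/nvme!(*c*); do            # skip controller nodes like nvme3c3n1
      dev=${block##*/}
      # zoned namespaces are excluded ("none" means a regular block device)
      [[ -e "$block/queue/zoned" && $(<"$block/queue/zoned") != none ]] && continue
      # disks that already carry a partition table are considered in use
      [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && continue
      size=$(( $(<"$block/size") * 512 ))           # sysfs reports 512-byte sectors
      (( size >= min_disk_size )) || continue
      pci=$(basename "$(readlink -f "$block/device/device")")   # assumed sysfs layout
      echo "usable: $dev ($pci, $size bytes)"
  done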
13:05:03 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:06:06.570 13:05:03 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:06:06.570 13:05:03 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:06:06.570 No valid GPT data, bailing 00:06:06.570 13:05:03 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:06:06.570 13:05:03 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:06.570 13:05:03 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:06.570 13:05:03 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:06:06.570 13:05:03 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:06:06.570 13:05:03 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:06:06.570 13:05:03 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:06:06.570 13:05:03 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:06:06.570 13:05:03 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:06:06.570 13:05:03 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:06:06.570 13:05:03 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:06:06.570 13:05:03 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:06.570 13:05:03 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.570 13:05:03 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:06.570 ************************************ 00:06:06.570 START TEST nvme_mount 00:06:06.570 ************************************ 00:06:06.570 13:05:03 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:06:06.570 13:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:06:06.570 13:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:06:06.570 13:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:06.570 13:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:06.570 13:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:06:06.570 13:05:03 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:06.570 13:05:03 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:06:06.570 13:05:03 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:06.570 13:05:03 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:06.570 13:05:03 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:06:06.570 13:05:03 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:06:06.570 13:05:03 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:06.570 13:05:03 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:06.570 13:05:03 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:06.570 13:05:03 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:06.570 13:05:03 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:06.570 13:05:03 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:06:06.570 13:05:03 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:06.570 13:05:03 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:07.943 Creating new GPT entries in memory. 00:06:07.943 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:07.943 other utilities. 00:06:07.943 13:05:04 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:07.943 13:05:04 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:07.943 13:05:04 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:07.943 13:05:04 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:07.943 13:05:04 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:08.877 Creating new GPT entries in memory. 00:06:08.877 The operation has completed successfully. 00:06:08.877 13:05:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:08.877 13:05:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:08.877 13:05:05 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 71652 00:06:08.878 13:05:05 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:08.878 13:05:05 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:06:08.878 13:05:05 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:08.878 13:05:05 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:08.878 13:05:05 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:08.878 13:05:05 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:08.878 13:05:05 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:08.878 13:05:05 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:08.878 13:05:05 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:08.878 13:05:05 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:08.878 13:05:05 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:08.878 13:05:05 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:08.878 13:05:05 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:08.878 13:05:05 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:08.878 13:05:05 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:08.878 13:05:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:08.878 13:05:05 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:08.878 13:05:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:08.878 13:05:05 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:08.878 13:05:05 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:09.135 13:05:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:09.135 13:05:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:09.135 13:05:05 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:09.135 13:05:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.135 13:05:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:09.135 13:05:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.135 13:05:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:09.135 13:05:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.135 13:05:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:09.135 13:05:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.135 13:05:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:09.135 13:05:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.699 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:09.699 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.699 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:09.699 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:09.699 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:09.699 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:09.699 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:09.699 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:06:09.699 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:09.699 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:09.699 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:09.699 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:09.699 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:09.699 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:09.699 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs 
--all /dev/nvme0n1 00:06:09.956 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:09.956 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:09.956 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:09.956 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:09.956 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:06:09.956 13:05:06 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:06:09.956 13:05:06 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:09.956 13:05:06 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:09.956 13:05:06 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:09.956 13:05:06 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:10.212 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:10.212 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:10.212 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:06:10.212 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:10.212 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:10.212 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:10.212 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:10.212 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:10.212 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:10.212 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.212 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:10.212 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:10.212 13:05:06 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:10.212 13:05:06 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:10.212 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:10.212 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:10.212 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:10.212 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.212 13:05:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:10.212 13:05:06 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.469 13:05:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:10.469 13:05:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.469 13:05:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:10.469 13:05:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.469 13:05:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:10.469 13:05:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.035 13:05:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:11.035 13:05:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.035 13:05:07 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:11.035 13:05:07 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:11.035 13:05:07 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:11.035 13:05:07 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:11.035 13:05:07 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:11.035 13:05:07 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:11.035 13:05:07 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:06:11.035 13:05:07 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:11.035 13:05:07 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:11.035 13:05:07 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:11.035 13:05:07 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:06:11.035 13:05:07 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:11.035 13:05:07 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:11.035 13:05:07 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:11.035 13:05:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.035 13:05:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:11.035 13:05:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:11.035 13:05:07 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:11.035 13:05:07 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:11.292 13:05:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:11.292 13:05:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:11.292 13:05:08 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:11.292 13:05:08 
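Condensed, the nvme_mount pass traced above is: wipe the disk, create one small GPT partition, put ext4 on it, mount it under test/setup/nvme_mount so setup.sh must leave that controller alone, then unwind everything. A hedged shell equivalent follows; the device name and mount point are taken from the trace, and the uevent synchronisation and verification loops are elided.

  # Sketch of the nvme_mount cycle above -- run as root, destroys data on $disk.
  disk=/dev/nvme0n1
  mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
  sgdisk "$disk" --zap-all                  # drop any existing GPT/MBR structures
  sgdisk "$disk" --new=1:2048:264191        # one ~128 MiB partition
  mkfs.ext4 -qF "${disk}p1"
  mkdir -p "$mnt"
  mount "${disk}p1" "$mnt"
  touch "$mnt/test_nvme"                    # marker file the verify step looks for
  # ... setup.sh config must now report the namespace as an active mount ...
  rm "$mnt/test_nvme"
  umount "$mnt"
  wipefs --all "${disk}p1"
  wipefs --all "$disk"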
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.292 13:05:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:11.292 13:05:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.549 13:05:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:11.549 13:05:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.549 13:05:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:11.549 13:05:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.549 13:05:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:11.549 13:05:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.113 13:05:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:12.113 13:05:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.113 13:05:08 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:12.113 13:05:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:12.113 13:05:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:06:12.113 13:05:08 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:06:12.113 13:05:08 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:12.113 13:05:08 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:12.113 13:05:08 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:12.113 13:05:08 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:12.113 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:12.113 00:06:12.113 real 0m5.494s 00:06:12.113 user 0m1.511s 00:06:12.113 sys 0m1.636s 00:06:12.113 13:05:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.113 ************************************ 00:06:12.113 END TEST nvme_mount 00:06:12.113 ************************************ 00:06:12.113 13:05:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:06:12.113 13:05:08 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:12.113 13:05:08 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:12.113 13:05:08 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.113 13:05:08 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:12.113 ************************************ 00:06:12.113 START TEST dm_mount 00:06:12.113 ************************************ 00:06:12.113 13:05:08 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:06:12.113 13:05:08 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:12.113 13:05:08 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:12.113 13:05:08 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:12.113 13:05:08 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:12.113 13:05:08 setup.sh.devices.dm_mount -- 
setup/common.sh@39 -- # local disk=nvme0n1 00:06:12.113 13:05:08 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:06:12.113 13:05:08 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:12.113 13:05:08 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:12.113 13:05:08 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:06:12.113 13:05:08 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:06:12.113 13:05:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:12.113 13:05:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:12.113 13:05:08 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:12.113 13:05:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:12.113 13:05:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:12.113 13:05:08 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:12.113 13:05:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:12.113 13:05:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:12.113 13:05:08 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:12.113 13:05:08 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:12.113 13:05:08 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:13.485 Creating new GPT entries in memory. 00:06:13.485 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:13.485 other utilities. 00:06:13.485 13:05:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:13.485 13:05:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:13.485 13:05:09 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:13.485 13:05:09 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:13.485 13:05:09 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:14.418 Creating new GPT entries in memory. 00:06:14.418 The operation has completed successfully. 00:06:14.418 13:05:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:14.418 13:05:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:14.418 13:05:10 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:14.418 13:05:10 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:14.418 13:05:10 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:06:15.351 The operation has completed successfully. 
00:06:15.351 13:05:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:15.351 13:05:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:15.351 13:05:11 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 72288 00:06:15.351 13:05:11 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:15.351 13:05:11 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:15.352 13:05:11 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:15.612 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:15.612 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:15.612 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:15.612 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:15.612 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:15.612 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:15.612 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:15.612 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:15.870 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:15.870 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:15.870 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:15.870 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.128 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:16.128 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.384 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:16.384 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:06:16.384 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:16.384 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:16.384 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:16.384 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:16.384 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:16.384 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:16.384 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:16.384 13:05:12 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:06:16.384 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:06:16.384 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:16.384 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:16.384 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:16.384 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.384 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:16.384 13:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:16.384 13:05:12 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:16.384 13:05:12 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:16.641 13:05:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:16.642 13:05:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:16.642 13:05:13 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:16.642 13:05:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.642 13:05:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:16.642 13:05:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.642 13:05:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:16.642 13:05:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.900 13:05:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:16.900 13:05:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.900 13:05:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:16.900 13:05:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.156 13:05:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:17.156 13:05:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.414 13:05:13 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:17.414 13:05:13 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:17.414 13:05:13 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:17.414 13:05:13 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:17.414 13:05:13 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:17.414 13:05:13 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:17.414 13:05:13 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:17.414 13:05:13 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:17.414 13:05:13 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
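The dm_mount pass repeats the same exercise through device-mapper: two small partitions are stitched into a single logical device, which is formatted and mounted so both partition holders stay busy. The dmsetup table itself is fed on stdin and never appears in the trace, so the linear concatenation below is an assumption; the remaining commands mirror the ones shown above.

  # Sketch of the dm_mount cycle above -- run as root, destroys data on $disk.
  disk=/dev/nvme0n1
  mnt=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount
  sgdisk "$disk" --zap-all
  sgdisk "$disk" --new=1:2048:264191
  sgdisk "$disk" --new=2:264192:526335
  # assumed mapping: concatenate both 262144-sector partitions into one linear device
  printf '%s\n' "0 262144 linear ${disk}p1 0" \
                "262144 262144 linear ${disk}p2 0" | dmsetup create nvme_dm_test
  mkfs.ext4 -qF /dev/mapper/nvme_dm_test
  mkdir -p "$mnt"
  mount /dev/mapper/nvme_dm_test "$mnt"
  # ... /sys/class/block/nvme0n1p{1,2}/holders/dm-0 should now exist ...
  umount "$mnt"
  dmsetup remove --force nvme_dm_test
  wipefs --all "${disk}p1" "${disk}p2"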
00:06:17.414 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:17.414 13:05:14 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:17.414 13:05:14 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:17.414 00:06:17.414 real 0m5.182s 00:06:17.414 user 0m0.969s 00:06:17.414 sys 0m1.115s 00:06:17.414 13:05:14 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.414 13:05:14 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:17.414 ************************************ 00:06:17.414 END TEST dm_mount 00:06:17.414 ************************************ 00:06:17.414 13:05:14 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:06:17.414 13:05:14 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:17.414 13:05:14 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:17.414 13:05:14 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:17.414 13:05:14 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:17.414 13:05:14 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:17.414 13:05:14 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:17.673 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:17.673 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:17.673 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:17.673 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:17.673 13:05:14 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:06:17.673 13:05:14 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:17.673 13:05:14 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:17.673 13:05:14 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:17.673 13:05:14 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:17.673 13:05:14 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:17.673 13:05:14 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:17.673 00:06:17.673 real 0m12.842s 00:06:17.673 user 0m3.406s 00:06:17.673 sys 0m3.676s 00:06:17.673 13:05:14 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.673 13:05:14 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:17.673 ************************************ 00:06:17.673 END TEST devices 00:06:17.673 ************************************ 00:06:17.673 00:06:17.673 real 0m44.867s 00:06:17.673 user 0m10.631s 00:06:17.673 sys 0m13.844s 00:06:17.673 13:05:14 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.673 13:05:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:17.673 ************************************ 00:06:17.673 END TEST setup.sh 00:06:17.673 ************************************ 00:06:17.930 13:05:14 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:18.494 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:18.751 Hugepages 00:06:18.751 node hugesize free / total 00:06:18.751 node0 1048576kB 0 / 0 00:06:18.751 node0 2048kB 2048 / 2048 00:06:18.751 
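The per-node hugepage figures in that status table come straight from sysfs, and the clear_hp step earlier in the log releases the pools simply by writing 0 into each nr_hugepages file. A small illustrative loop over the same files (read-only by default; the write is left commented out):

  # Sketch: report, and optionally clear, per-node hugepage pools as clear_hp does above.
  for node in /sys/devices/system/node/node*; do
      for hp in "$node"/hugepages/hugepages-*; do
          size=${hp##*hugepages-}                    # e.g. 2048kB or 1048576kB
          echo "${node##*/} $size: $(<"$hp"/nr_hugepages) pages"
          # echo 0 > "$hp"/nr_hugepages              # uncomment to release the pool
      done
  done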
00:06:18.751 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:19.049 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:19.049 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:06:19.049 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:19.049 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:06:19.306 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:06:19.306 13:05:15 -- spdk/autotest.sh@130 -- # uname -s 00:06:19.306 13:05:15 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:19.306 13:05:15 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:19.306 13:05:15 -- common/autotest_common.sh@1527 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:19.871 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:20.437 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:20.437 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:20.437 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:20.437 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:20.437 13:05:17 -- common/autotest_common.sh@1528 -- # sleep 1 00:06:21.808 13:05:18 -- common/autotest_common.sh@1529 -- # bdfs=() 00:06:21.808 13:05:18 -- common/autotest_common.sh@1529 -- # local bdfs 00:06:21.808 13:05:18 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:06:21.808 13:05:18 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:06:21.808 13:05:18 -- common/autotest_common.sh@1509 -- # bdfs=() 00:06:21.808 13:05:18 -- common/autotest_common.sh@1509 -- # local bdfs 00:06:21.808 13:05:18 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:21.808 13:05:18 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:21.808 13:05:18 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:06:21.808 13:05:18 -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:06:21.808 13:05:18 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:21.808 13:05:18 -- common/autotest_common.sh@1532 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:22.066 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:22.066 Waiting for block devices as requested 00:06:22.324 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:22.324 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:22.324 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:22.582 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:27.849 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:27.849 13:05:24 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:27.849 13:05:24 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:27.849 13:05:24 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:27.849 13:05:24 -- common/autotest_common.sh@1498 -- # grep 0000:00:10.0/nvme/nvme 00:06:27.849 13:05:24 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:27.849 13:05:24 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:27.849 13:05:24 -- common/autotest_common.sh@1503 -- # basename 
/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:27.849 13:05:24 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme1 00:06:27.849 13:05:24 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme1 00:06:27.849 13:05:24 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme1 ]] 00:06:27.849 13:05:24 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme1 00:06:27.849 13:05:24 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:27.849 13:05:24 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:27.849 13:05:24 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:06:27.849 13:05:24 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:27.849 13:05:24 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:27.849 13:05:24 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme1 00:06:27.849 13:05:24 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:27.849 13:05:24 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:27.849 13:05:24 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:27.849 13:05:24 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:27.849 13:05:24 -- common/autotest_common.sh@1553 -- # continue 00:06:27.849 13:05:24 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:27.849 13:05:24 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:27.849 13:05:24 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:27.849 13:05:24 -- common/autotest_common.sh@1498 -- # grep 0000:00:11.0/nvme/nvme 00:06:27.849 13:05:24 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:27.849 13:05:24 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:27.849 13:05:24 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:27.849 13:05:24 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:06:27.849 13:05:24 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:06:27.849 13:05:24 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:06:27.849 13:05:24 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:06:27.849 13:05:24 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:27.849 13:05:24 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:27.849 13:05:24 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:06:27.849 13:05:24 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:27.849 13:05:24 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:27.849 13:05:24 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:06:27.849 13:05:24 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:27.849 13:05:24 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:27.849 13:05:24 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:27.849 13:05:24 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:27.849 13:05:24 -- common/autotest_common.sh@1553 -- # continue 00:06:27.849 13:05:24 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:27.849 13:05:24 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:06:27.849 13:05:24 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:27.849 13:05:24 -- common/autotest_common.sh@1498 -- # 
grep 0000:00:12.0/nvme/nvme 00:06:27.849 13:05:24 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:27.849 13:05:24 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:06:27.849 13:05:24 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:27.849 13:05:24 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme2 00:06:27.849 13:05:24 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme2 00:06:27.849 13:05:24 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme2 ]] 00:06:27.849 13:05:24 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme2 00:06:27.849 13:05:24 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:27.849 13:05:24 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:27.849 13:05:24 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:06:27.849 13:05:24 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:27.849 13:05:24 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:27.849 13:05:24 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme2 00:06:27.849 13:05:24 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:27.849 13:05:24 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:27.849 13:05:24 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:27.849 13:05:24 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:27.849 13:05:24 -- common/autotest_common.sh@1553 -- # continue 00:06:27.849 13:05:24 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:27.849 13:05:24 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:06:27.849 13:05:24 -- common/autotest_common.sh@1498 -- # grep 0000:00:13.0/nvme/nvme 00:06:27.849 13:05:24 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:27.849 13:05:24 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:27.849 13:05:24 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:06:27.849 13:05:24 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:27.849 13:05:24 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme3 00:06:27.849 13:05:24 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme3 00:06:27.849 13:05:24 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme3 ]] 00:06:27.849 13:05:24 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme3 00:06:27.849 13:05:24 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:27.849 13:05:24 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:27.849 13:05:24 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:06:27.849 13:05:24 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:27.849 13:05:24 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:27.849 13:05:24 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme3 00:06:27.849 13:05:24 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:27.849 13:05:24 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:27.849 13:05:24 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:27.849 13:05:24 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:27.849 13:05:24 -- common/autotest_common.sh@1553 -- # continue 00:06:27.849 13:05:24 -- spdk/autotest.sh@135 -- # timing_exit 
pre_cleanup 00:06:27.849 13:05:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:27.849 13:05:24 -- common/autotest_common.sh@10 -- # set +x 00:06:27.849 13:05:24 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:27.849 13:05:24 -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:27.849 13:05:24 -- common/autotest_common.sh@10 -- # set +x 00:06:27.849 13:05:24 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:28.431 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:28.999 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:28.999 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:28.999 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:28.999 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:28.999 13:05:25 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:28.999 13:05:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:28.999 13:05:25 -- common/autotest_common.sh@10 -- # set +x 00:06:29.258 13:05:25 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:29.258 13:05:25 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:06:29.258 13:05:25 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:06:29.258 13:05:25 -- common/autotest_common.sh@1573 -- # bdfs=() 00:06:29.258 13:05:25 -- common/autotest_common.sh@1573 -- # local bdfs 00:06:29.258 13:05:25 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:06:29.258 13:05:25 -- common/autotest_common.sh@1509 -- # bdfs=() 00:06:29.258 13:05:25 -- common/autotest_common.sh@1509 -- # local bdfs 00:06:29.258 13:05:25 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:29.258 13:05:25 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:29.258 13:05:25 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:06:29.258 13:05:25 -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:06:29.258 13:05:25 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:29.258 13:05:25 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:29.258 13:05:25 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:29.258 13:05:25 -- common/autotest_common.sh@1576 -- # device=0x0010 00:06:29.258 13:05:25 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:29.258 13:05:25 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:29.258 13:05:25 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:29.258 13:05:25 -- common/autotest_common.sh@1576 -- # device=0x0010 00:06:29.258 13:05:25 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:29.258 13:05:25 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:29.258 13:05:25 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:06:29.258 13:05:25 -- common/autotest_common.sh@1576 -- # device=0x0010 00:06:29.258 13:05:25 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:29.258 13:05:25 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:29.258 13:05:25 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:06:29.258 13:05:25 -- common/autotest_common.sh@1576 -- # device=0x0010 00:06:29.258 
13:05:25 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:29.258 13:05:25 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:06:29.258 13:05:25 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:06:29.258 13:05:25 -- common/autotest_common.sh@1589 -- # return 0 00:06:29.258 13:05:25 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:29.258 13:05:25 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:29.258 13:05:25 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:29.258 13:05:25 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:29.258 13:05:25 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:29.258 13:05:25 -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:29.258 13:05:25 -- common/autotest_common.sh@10 -- # set +x 00:06:29.258 13:05:25 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:29.258 13:05:25 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:29.258 13:05:25 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:29.258 13:05:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.258 13:05:25 -- common/autotest_common.sh@10 -- # set +x 00:06:29.258 ************************************ 00:06:29.258 START TEST env 00:06:29.258 ************************************ 00:06:29.258 13:05:25 env -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:29.258 * Looking for test storage... 00:06:29.258 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:29.258 13:05:25 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:29.258 13:05:25 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:29.258 13:05:25 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.258 13:05:25 env -- common/autotest_common.sh@10 -- # set +x 00:06:29.258 ************************************ 00:06:29.258 START TEST env_memory 00:06:29.258 ************************************ 00:06:29.258 13:05:25 env.env_memory -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:29.517 00:06:29.517 00:06:29.517 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.517 http://cunit.sourceforge.net/ 00:06:29.517 00:06:29.517 00:06:29.517 Suite: memory 00:06:29.517 Test: alloc and free memory map ...[2024-07-15 13:05:26.056829] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:29.517 passed 00:06:29.517 Test: mem map translation ...[2024-07-15 13:05:26.118884] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:29.517 [2024-07-15 13:05:26.119023] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:29.517 [2024-07-15 13:05:26.119218] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:29.517 [2024-07-15 13:05:26.119258] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:29.517 passed 00:06:29.517 Test: mem map registration ...[2024-07-15 13:05:26.218022] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register 
parameters, vaddr=0x200000 len=1234 00:06:29.517 [2024-07-15 13:05:26.218193] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:29.517 passed 00:06:29.777 Test: mem map adjacent registrations ...passed 00:06:29.777 00:06:29.777 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.777 suites 1 1 n/a 0 0 00:06:29.777 tests 4 4 4 0 0 00:06:29.777 asserts 152 152 152 0 n/a 00:06:29.777 00:06:29.777 Elapsed time = 0.345 seconds 00:06:29.777 00:06:29.777 real 0m0.391s 00:06:29.777 user 0m0.357s 00:06:29.777 sys 0m0.027s 00:06:29.777 13:05:26 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.777 13:05:26 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:29.777 ************************************ 00:06:29.777 END TEST env_memory 00:06:29.777 ************************************ 00:06:29.777 13:05:26 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:29.777 13:05:26 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:29.777 13:05:26 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.777 13:05:26 env -- common/autotest_common.sh@10 -- # set +x 00:06:29.777 ************************************ 00:06:29.777 START TEST env_vtophys 00:06:29.777 ************************************ 00:06:29.777 13:05:26 env.env_vtophys -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:29.777 EAL: lib.eal log level changed from notice to debug 00:06:29.777 EAL: Detected lcore 0 as core 0 on socket 0 00:06:29.777 EAL: Detected lcore 1 as core 0 on socket 0 00:06:29.777 EAL: Detected lcore 2 as core 0 on socket 0 00:06:29.777 EAL: Detected lcore 3 as core 0 on socket 0 00:06:29.777 EAL: Detected lcore 4 as core 0 on socket 0 00:06:29.777 EAL: Detected lcore 5 as core 0 on socket 0 00:06:29.777 EAL: Detected lcore 6 as core 0 on socket 0 00:06:29.777 EAL: Detected lcore 7 as core 0 on socket 0 00:06:29.777 EAL: Detected lcore 8 as core 0 on socket 0 00:06:29.777 EAL: Detected lcore 9 as core 0 on socket 0 00:06:29.777 EAL: Maximum logical cores by configuration: 128 00:06:29.777 EAL: Detected CPU lcores: 10 00:06:29.777 EAL: Detected NUMA nodes: 1 00:06:29.777 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:06:29.777 EAL: Detected shared linkage of DPDK 00:06:29.777 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:06:29.777 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:06:29.777 EAL: Registered [vdev] bus. 
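Every test in this log is driven by the run_test helper, which prints the starred START/END banners and the real/user/sys timing lines shown above for env_memory. A minimal bash sketch of that wrapper (simplified; the real helper in autotest_common.sh also does result bookkeeping and xtrace handling, which are omitted here):

    run_test() {                          # simplified sketch, not the real autotest_common.sh code
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                         # emits the real/user/sys lines, e.g. 0m0.391s for memory_ut above
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut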
00:06:29.777 EAL: bus.vdev log level changed from disabled to notice 00:06:29.777 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:06:29.777 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:06:29.777 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:29.777 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:29.777 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:06:29.777 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:06:29.777 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:06:29.777 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:06:29.777 EAL: No shared files mode enabled, IPC will be disabled 00:06:29.777 EAL: No shared files mode enabled, IPC is disabled 00:06:29.777 EAL: Selected IOVA mode 'PA' 00:06:29.777 EAL: Probing VFIO support... 00:06:29.777 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:29.777 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:29.777 EAL: Ask a virtual area of 0x2e000 bytes 00:06:29.777 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:29.777 EAL: Setting up physically contiguous memory... 00:06:29.777 EAL: Setting maximum number of open files to 524288 00:06:29.777 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:29.777 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:29.777 EAL: Ask a virtual area of 0x61000 bytes 00:06:29.777 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:29.777 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:29.777 EAL: Ask a virtual area of 0x400000000 bytes 00:06:29.777 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:29.777 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:29.777 EAL: Ask a virtual area of 0x61000 bytes 00:06:29.777 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:29.777 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:29.777 EAL: Ask a virtual area of 0x400000000 bytes 00:06:29.777 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:29.777 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:29.777 EAL: Ask a virtual area of 0x61000 bytes 00:06:29.777 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:29.777 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:29.777 EAL: Ask a virtual area of 0x400000000 bytes 00:06:29.777 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:29.777 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:29.777 EAL: Ask a virtual area of 0x61000 bytes 00:06:29.777 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:29.777 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:29.777 EAL: Ask a virtual area of 0x400000000 bytes 00:06:29.777 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:29.777 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:29.777 EAL: Hugepages will be freed exactly as allocated. 
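The four 0x400000000-byte virtual areas reserved above follow directly from the memseg geometry EAL prints: 8192 segments per list at the detected 2 MiB (2097152-byte) hugepage size is 16 GiB of address space per list, or 64 GiB across the four lists. A quick bash check of that arithmetic:

    segs_per_list=8192                                   # n_segs from the EAL lines above
    hugepage_bytes=$((2 * 1024 * 1024))                  # hugepage_sz:2097152
    per_list=$((segs_per_list * hugepage_bytes))
    printf 'per list : 0x%x bytes (%d GiB)\n' "$per_list" $((per_list / 1024**3))
    printf 'all lists: %d GiB of VA reserved\n' $((4 * per_list / 1024**3))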
00:06:29.777 EAL: No shared files mode enabled, IPC is disabled 00:06:29.777 EAL: No shared files mode enabled, IPC is disabled 00:06:30.037 EAL: TSC frequency is ~2200000 KHz 00:06:30.037 EAL: Main lcore 0 is ready (tid=7f07640f8a40;cpuset=[0]) 00:06:30.037 EAL: Trying to obtain current memory policy. 00:06:30.037 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.037 EAL: Restoring previous memory policy: 0 00:06:30.037 EAL: request: mp_malloc_sync 00:06:30.037 EAL: No shared files mode enabled, IPC is disabled 00:06:30.037 EAL: Heap on socket 0 was expanded by 2MB 00:06:30.037 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:30.037 EAL: No shared files mode enabled, IPC is disabled 00:06:30.037 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:30.037 EAL: Mem event callback 'spdk:(nil)' registered 00:06:30.037 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:30.037 00:06:30.037 00:06:30.037 CUnit - A unit testing framework for C - Version 2.1-3 00:06:30.037 http://cunit.sourceforge.net/ 00:06:30.037 00:06:30.037 00:06:30.037 Suite: components_suite 00:06:30.604 Test: vtophys_malloc_test ...passed 00:06:30.604 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:30.604 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.604 EAL: Restoring previous memory policy: 4 00:06:30.604 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.604 EAL: request: mp_malloc_sync 00:06:30.604 EAL: No shared files mode enabled, IPC is disabled 00:06:30.604 EAL: Heap on socket 0 was expanded by 4MB 00:06:30.604 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.604 EAL: request: mp_malloc_sync 00:06:30.604 EAL: No shared files mode enabled, IPC is disabled 00:06:30.604 EAL: Heap on socket 0 was shrunk by 4MB 00:06:30.604 EAL: Trying to obtain current memory policy. 00:06:30.604 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.604 EAL: Restoring previous memory policy: 4 00:06:30.604 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.604 EAL: request: mp_malloc_sync 00:06:30.604 EAL: No shared files mode enabled, IPC is disabled 00:06:30.604 EAL: Heap on socket 0 was expanded by 6MB 00:06:30.604 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.604 EAL: request: mp_malloc_sync 00:06:30.604 EAL: No shared files mode enabled, IPC is disabled 00:06:30.604 EAL: Heap on socket 0 was shrunk by 6MB 00:06:30.604 EAL: Trying to obtain current memory policy. 00:06:30.604 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.604 EAL: Restoring previous memory policy: 4 00:06:30.604 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.604 EAL: request: mp_malloc_sync 00:06:30.604 EAL: No shared files mode enabled, IPC is disabled 00:06:30.604 EAL: Heap on socket 0 was expanded by 10MB 00:06:30.604 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.604 EAL: request: mp_malloc_sync 00:06:30.604 EAL: No shared files mode enabled, IPC is disabled 00:06:30.604 EAL: Heap on socket 0 was shrunk by 10MB 00:06:30.604 EAL: Trying to obtain current memory policy. 
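The heap expansions reported in these rounds (4MB, 6MB, 10MB so far, then 18MB up to 1026MB below) are each a power-of-two allocation size plus 2 MB, consistent with vtophys_spdk_malloc_test doubling its allocation size every round, with the extra 2 MB presumably one additional hugepage of allocator overhead; this is an inference from the log, not from the test source. The expected sequence:

    for mb in 2 4 8 16 32 64 128 256 512 1024; do        # assumed doubling allocation sizes
        echo "expanded/shrunk by $((mb + 2))MB"           # matches the ten rounds in this log
    done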
00:06:30.604 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.604 EAL: Restoring previous memory policy: 4 00:06:30.604 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.604 EAL: request: mp_malloc_sync 00:06:30.604 EAL: No shared files mode enabled, IPC is disabled 00:06:30.604 EAL: Heap on socket 0 was expanded by 18MB 00:06:30.604 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.604 EAL: request: mp_malloc_sync 00:06:30.604 EAL: No shared files mode enabled, IPC is disabled 00:06:30.604 EAL: Heap on socket 0 was shrunk by 18MB 00:06:30.604 EAL: Trying to obtain current memory policy. 00:06:30.604 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.604 EAL: Restoring previous memory policy: 4 00:06:30.604 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.604 EAL: request: mp_malloc_sync 00:06:30.604 EAL: No shared files mode enabled, IPC is disabled 00:06:30.604 EAL: Heap on socket 0 was expanded by 34MB 00:06:30.604 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.604 EAL: request: mp_malloc_sync 00:06:30.604 EAL: No shared files mode enabled, IPC is disabled 00:06:30.604 EAL: Heap on socket 0 was shrunk by 34MB 00:06:30.604 EAL: Trying to obtain current memory policy. 00:06:30.604 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.604 EAL: Restoring previous memory policy: 4 00:06:30.604 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.604 EAL: request: mp_malloc_sync 00:06:30.604 EAL: No shared files mode enabled, IPC is disabled 00:06:30.604 EAL: Heap on socket 0 was expanded by 66MB 00:06:30.604 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.604 EAL: request: mp_malloc_sync 00:06:30.604 EAL: No shared files mode enabled, IPC is disabled 00:06:30.604 EAL: Heap on socket 0 was shrunk by 66MB 00:06:30.604 EAL: Trying to obtain current memory policy. 00:06:30.604 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.604 EAL: Restoring previous memory policy: 4 00:06:30.604 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.604 EAL: request: mp_malloc_sync 00:06:30.604 EAL: No shared files mode enabled, IPC is disabled 00:06:30.604 EAL: Heap on socket 0 was expanded by 130MB 00:06:30.604 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.604 EAL: request: mp_malloc_sync 00:06:30.604 EAL: No shared files mode enabled, IPC is disabled 00:06:30.604 EAL: Heap on socket 0 was shrunk by 130MB 00:06:30.604 EAL: Trying to obtain current memory policy. 00:06:30.604 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.863 EAL: Restoring previous memory policy: 4 00:06:30.863 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.863 EAL: request: mp_malloc_sync 00:06:30.863 EAL: No shared files mode enabled, IPC is disabled 00:06:30.863 EAL: Heap on socket 0 was expanded by 258MB 00:06:30.863 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.863 EAL: request: mp_malloc_sync 00:06:30.863 EAL: No shared files mode enabled, IPC is disabled 00:06:30.863 EAL: Heap on socket 0 was shrunk by 258MB 00:06:30.863 EAL: Trying to obtain current memory policy. 
00:06:30.863 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.122 EAL: Restoring previous memory policy: 4 00:06:31.122 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.122 EAL: request: mp_malloc_sync 00:06:31.122 EAL: No shared files mode enabled, IPC is disabled 00:06:31.122 EAL: Heap on socket 0 was expanded by 514MB 00:06:31.122 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.122 EAL: request: mp_malloc_sync 00:06:31.122 EAL: No shared files mode enabled, IPC is disabled 00:06:31.122 EAL: Heap on socket 0 was shrunk by 514MB 00:06:31.122 EAL: Trying to obtain current memory policy. 00:06:31.122 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.688 EAL: Restoring previous memory policy: 4 00:06:31.688 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.688 EAL: request: mp_malloc_sync 00:06:31.688 EAL: No shared files mode enabled, IPC is disabled 00:06:31.688 EAL: Heap on socket 0 was expanded by 1026MB 00:06:31.688 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.946 passed 00:06:31.946 00:06:31.946 EAL: request: mp_malloc_sync 00:06:31.946 EAL: No shared files mode enabled, IPC is disabled 00:06:31.946 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:31.946 Run Summary: Type Total Ran Passed Failed Inactive 00:06:31.947 suites 1 1 n/a 0 0 00:06:31.947 tests 2 2 2 0 0 00:06:31.947 asserts 5449 5449 5449 0 n/a 00:06:31.947 00:06:31.947 Elapsed time = 1.921 seconds 00:06:31.947 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.947 EAL: request: mp_malloc_sync 00:06:31.947 EAL: No shared files mode enabled, IPC is disabled 00:06:31.947 EAL: Heap on socket 0 was shrunk by 2MB 00:06:31.947 EAL: No shared files mode enabled, IPC is disabled 00:06:31.947 EAL: No shared files mode enabled, IPC is disabled 00:06:31.947 EAL: No shared files mode enabled, IPC is disabled 00:06:31.947 00:06:31.947 real 0m2.179s 00:06:31.947 user 0m1.074s 00:06:31.947 sys 0m0.960s 00:06:31.947 13:05:28 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.947 13:05:28 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:31.947 ************************************ 00:06:31.947 END TEST env_vtophys 00:06:31.947 ************************************ 00:06:31.947 13:05:28 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:31.947 13:05:28 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:31.947 13:05:28 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.947 13:05:28 env -- common/autotest_common.sh@10 -- # set +x 00:06:31.947 ************************************ 00:06:31.947 START TEST env_pci 00:06:31.947 ************************************ 00:06:31.947 13:05:28 env.env_pci -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:31.947 00:06:31.947 00:06:31.947 CUnit - A unit testing framework for C - Version 2.1-3 00:06:31.947 http://cunit.sourceforge.net/ 00:06:31.947 00:06:31.947 00:06:31.947 Suite: pci 00:06:31.947 Test: pci_hook ...[2024-07-15 13:05:28.682944] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 74061 has claimed it 00:06:32.205 passedEAL: Cannot find device (10000:00:01.0) 00:06:32.205 EAL: Failed to attach device on primary process 00:06:32.205 00:06:32.205 00:06:32.205 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.205 suites 1 1 n/a 0 0 00:06:32.205 tests 1 1 1 0 0 00:06:32.205 
asserts 25 25 25 0 n/a 00:06:32.205 00:06:32.205 Elapsed time = 0.007 seconds 00:06:32.205 00:06:32.205 real 0m0.066s 00:06:32.205 user 0m0.028s 00:06:32.205 sys 0m0.038s 00:06:32.205 13:05:28 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.205 ************************************ 00:06:32.205 13:05:28 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:32.205 END TEST env_pci 00:06:32.205 ************************************ 00:06:32.205 13:05:28 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:32.205 13:05:28 env -- env/env.sh@15 -- # uname 00:06:32.205 13:05:28 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:32.205 13:05:28 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:32.205 13:05:28 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:32.205 13:05:28 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:06:32.205 13:05:28 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.205 13:05:28 env -- common/autotest_common.sh@10 -- # set +x 00:06:32.205 ************************************ 00:06:32.205 START TEST env_dpdk_post_init 00:06:32.205 ************************************ 00:06:32.205 13:05:28 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:32.205 EAL: Detected CPU lcores: 10 00:06:32.205 EAL: Detected NUMA nodes: 1 00:06:32.205 EAL: Detected shared linkage of DPDK 00:06:32.205 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:32.205 EAL: Selected IOVA mode 'PA' 00:06:32.464 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:32.464 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:32.464 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:32.464 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:06:32.464 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:06:32.464 Starting DPDK initialization... 00:06:32.464 Starting SPDK post initialization... 00:06:32.464 SPDK NVMe probe 00:06:32.464 Attaching to 0000:00:10.0 00:06:32.464 Attaching to 0000:00:11.0 00:06:32.464 Attaching to 0000:00:12.0 00:06:32.464 Attaching to 0000:00:13.0 00:06:32.464 Attached to 0000:00:10.0 00:06:32.464 Attached to 0000:00:11.0 00:06:32.464 Attached to 0000:00:13.0 00:06:32.464 Attached to 0000:00:12.0 00:06:32.464 Cleaning up... 
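The probe lines above attach SPDK's userspace NVMe driver (spdk_nvme) to the same four 1b36:0010 controllers that the earlier device table mapped to nvme0 through nvme3; the attach-completion order (13.0 before 12.0) need not match the probe order. Which driver a BDF is bound to at any given moment can be read straight from sysfs, roughly:

    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        link=/sys/bus/pci/devices/$bdf/driver
        if [[ -e $link ]]; then
            echo "$bdf -> $(basename "$(readlink -f "$link")")"   # e.g. nvme or uio_pci_generic
        else
            echo "$bdf -> no driver bound"
        fi
    done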
00:06:32.464 00:06:32.464 real 0m0.263s 00:06:32.464 user 0m0.079s 00:06:32.464 sys 0m0.087s 00:06:32.464 13:05:29 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.464 13:05:29 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:32.464 ************************************ 00:06:32.464 END TEST env_dpdk_post_init 00:06:32.464 ************************************ 00:06:32.464 13:05:29 env -- env/env.sh@26 -- # uname 00:06:32.464 13:05:29 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:32.464 13:05:29 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:32.464 13:05:29 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:32.464 13:05:29 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.464 13:05:29 env -- common/autotest_common.sh@10 -- # set +x 00:06:32.464 ************************************ 00:06:32.464 START TEST env_mem_callbacks 00:06:32.464 ************************************ 00:06:32.464 13:05:29 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:32.464 EAL: Detected CPU lcores: 10 00:06:32.464 EAL: Detected NUMA nodes: 1 00:06:32.464 EAL: Detected shared linkage of DPDK 00:06:32.464 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:32.464 EAL: Selected IOVA mode 'PA' 00:06:32.723 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:32.723 00:06:32.723 00:06:32.723 CUnit - A unit testing framework for C - Version 2.1-3 00:06:32.723 http://cunit.sourceforge.net/ 00:06:32.723 00:06:32.723 00:06:32.723 Suite: memory 00:06:32.723 Test: test ... 00:06:32.723 register 0x200000200000 2097152 00:06:32.723 malloc 3145728 00:06:32.723 register 0x200000400000 4194304 00:06:32.723 buf 0x200000500000 len 3145728 PASSED 00:06:32.723 malloc 64 00:06:32.723 buf 0x2000004fff40 len 64 PASSED 00:06:32.723 malloc 4194304 00:06:32.723 register 0x200000800000 6291456 00:06:32.723 buf 0x200000a00000 len 4194304 PASSED 00:06:32.723 free 0x200000500000 3145728 00:06:32.723 free 0x2000004fff40 64 00:06:32.723 unregister 0x200000400000 4194304 PASSED 00:06:32.723 free 0x200000a00000 4194304 00:06:32.723 unregister 0x200000800000 6291456 PASSED 00:06:32.723 malloc 8388608 00:06:32.723 register 0x200000400000 10485760 00:06:32.723 buf 0x200000600000 len 8388608 PASSED 00:06:32.723 free 0x200000600000 8388608 00:06:32.723 unregister 0x200000400000 10485760 PASSED 00:06:32.723 passed 00:06:32.723 00:06:32.723 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.723 suites 1 1 n/a 0 0 00:06:32.723 tests 1 1 1 0 0 00:06:32.723 asserts 15 15 15 0 n/a 00:06:32.723 00:06:32.723 Elapsed time = 0.011 seconds 00:06:32.723 00:06:32.723 real 0m0.181s 00:06:32.723 user 0m0.024s 00:06:32.723 sys 0m0.056s 00:06:32.723 13:05:29 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.723 13:05:29 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:32.723 ************************************ 00:06:32.723 END TEST env_mem_callbacks 00:06:32.723 ************************************ 00:06:32.723 00:06:32.723 real 0m3.455s 00:06:32.723 user 0m1.684s 00:06:32.723 sys 0m1.405s 00:06:32.723 13:05:29 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.723 13:05:29 env -- common/autotest_common.sh@10 -- # set +x 00:06:32.723 ************************************ 00:06:32.723 END TEST env 00:06:32.723 
************************************ 00:06:32.724 13:05:29 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:32.724 13:05:29 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:32.724 13:05:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.724 13:05:29 -- common/autotest_common.sh@10 -- # set +x 00:06:32.724 ************************************ 00:06:32.724 START TEST rpc 00:06:32.724 ************************************ 00:06:32.724 13:05:29 rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:32.982 * Looking for test storage... 00:06:32.982 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:32.982 13:05:29 rpc -- rpc/rpc.sh@65 -- # spdk_pid=74180 00:06:32.982 13:05:29 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:32.982 13:05:29 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:32.982 13:05:29 rpc -- rpc/rpc.sh@67 -- # waitforlisten 74180 00:06:32.982 13:05:29 rpc -- common/autotest_common.sh@827 -- # '[' -z 74180 ']' 00:06:32.982 13:05:29 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.982 13:05:29 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:32.982 13:05:29 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.982 13:05:29 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:32.982 13:05:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.982 [2024-07-15 13:05:29.611392] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:32.982 [2024-07-15 13:05:29.611627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74180 ] 00:06:33.241 [2024-07-15 13:05:29.767295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.241 [2024-07-15 13:05:29.872580] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:33.241 [2024-07-15 13:05:29.872651] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 74180' to capture a snapshot of events at runtime. 00:06:33.241 [2024-07-15 13:05:29.872672] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:33.241 [2024-07-15 13:05:29.872698] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:33.241 [2024-07-15 13:05:29.872716] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid74180 for offline analysis/debug. 
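The pattern traced above is: start spdk_tgt with only the bdev subsystem, wait for the RPC socket at /var/tmp/spdk.sock to answer, then drive it with RPCs; every rpc_* test below builds on it. A rough standalone equivalent using scripts/rpc.py directly (the suite goes through its rpc_cmd wrapper instead, and the polling loop here is an assumption):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    tgt_pid=$!

    # poll until the UNIX-domain RPC socket answers; waitforlisten retries much like this
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done

    # the same call rpc_integrity issues below as 'rpc_cmd bdev_malloc_create 8 512'
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 8 512

    kill "$tgt_pid"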
00:06:33.241 [2024-07-15 13:05:29.872766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.176 13:05:30 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:34.176 13:05:30 rpc -- common/autotest_common.sh@860 -- # return 0 00:06:34.176 13:05:30 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:34.176 13:05:30 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:34.176 13:05:30 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:34.176 13:05:30 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:34.176 13:05:30 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:34.176 13:05:30 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.176 13:05:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.176 ************************************ 00:06:34.176 START TEST rpc_integrity 00:06:34.176 ************************************ 00:06:34.176 13:05:30 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:06:34.176 13:05:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:34.176 13:05:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.176 13:05:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.176 13:05:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.176 13:05:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:34.176 13:05:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:34.176 13:05:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:34.176 13:05:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:34.176 13:05:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.176 13:05:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.176 13:05:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.176 13:05:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:34.176 13:05:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:34.176 13:05:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.176 13:05:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.176 13:05:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.176 13:05:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:34.176 { 00:06:34.176 "name": "Malloc0", 00:06:34.176 "aliases": [ 00:06:34.176 "5947c6ff-1e82-44b9-83a2-17a04fc737e9" 00:06:34.176 ], 00:06:34.176 "product_name": "Malloc disk", 00:06:34.176 "block_size": 512, 00:06:34.176 "num_blocks": 16384, 00:06:34.176 "uuid": "5947c6ff-1e82-44b9-83a2-17a04fc737e9", 00:06:34.176 "assigned_rate_limits": { 00:06:34.176 "rw_ios_per_sec": 0, 00:06:34.176 "rw_mbytes_per_sec": 0, 00:06:34.176 "r_mbytes_per_sec": 0, 00:06:34.176 "w_mbytes_per_sec": 0 00:06:34.176 }, 00:06:34.176 "claimed": false, 00:06:34.176 "zoned": false, 00:06:34.176 "supported_io_types": { 00:06:34.176 "read": true, 00:06:34.176 "write": true, 00:06:34.176 "unmap": true, 00:06:34.176 "write_zeroes": 
true, 00:06:34.176 "flush": true, 00:06:34.176 "reset": true, 00:06:34.176 "compare": false, 00:06:34.176 "compare_and_write": false, 00:06:34.176 "abort": true, 00:06:34.176 "nvme_admin": false, 00:06:34.176 "nvme_io": false 00:06:34.176 }, 00:06:34.176 "memory_domains": [ 00:06:34.176 { 00:06:34.176 "dma_device_id": "system", 00:06:34.176 "dma_device_type": 1 00:06:34.176 }, 00:06:34.176 { 00:06:34.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.176 "dma_device_type": 2 00:06:34.176 } 00:06:34.176 ], 00:06:34.176 "driver_specific": {} 00:06:34.176 } 00:06:34.176 ]' 00:06:34.176 13:05:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:34.176 13:05:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:34.176 13:05:30 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:34.176 13:05:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.176 13:05:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.176 [2024-07-15 13:05:30.720362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:34.176 [2024-07-15 13:05:30.720452] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:34.176 [2024-07-15 13:05:30.720499] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:06:34.176 [2024-07-15 13:05:30.720531] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:34.176 [2024-07-15 13:05:30.723780] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:34.176 [2024-07-15 13:05:30.723840] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:34.176 Passthru0 00:06:34.176 13:05:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.176 13:05:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:34.176 13:05:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.176 13:05:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.176 13:05:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.176 13:05:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:34.176 { 00:06:34.176 "name": "Malloc0", 00:06:34.176 "aliases": [ 00:06:34.176 "5947c6ff-1e82-44b9-83a2-17a04fc737e9" 00:06:34.176 ], 00:06:34.176 "product_name": "Malloc disk", 00:06:34.176 "block_size": 512, 00:06:34.176 "num_blocks": 16384, 00:06:34.176 "uuid": "5947c6ff-1e82-44b9-83a2-17a04fc737e9", 00:06:34.176 "assigned_rate_limits": { 00:06:34.176 "rw_ios_per_sec": 0, 00:06:34.176 "rw_mbytes_per_sec": 0, 00:06:34.176 "r_mbytes_per_sec": 0, 00:06:34.176 "w_mbytes_per_sec": 0 00:06:34.176 }, 00:06:34.177 "claimed": true, 00:06:34.177 "claim_type": "exclusive_write", 00:06:34.177 "zoned": false, 00:06:34.177 "supported_io_types": { 00:06:34.177 "read": true, 00:06:34.177 "write": true, 00:06:34.177 "unmap": true, 00:06:34.177 "write_zeroes": true, 00:06:34.177 "flush": true, 00:06:34.177 "reset": true, 00:06:34.177 "compare": false, 00:06:34.177 "compare_and_write": false, 00:06:34.177 "abort": true, 00:06:34.177 "nvme_admin": false, 00:06:34.177 "nvme_io": false 00:06:34.177 }, 00:06:34.177 "memory_domains": [ 00:06:34.177 { 00:06:34.177 "dma_device_id": "system", 00:06:34.177 "dma_device_type": 1 00:06:34.177 }, 00:06:34.177 { 00:06:34.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.177 "dma_device_type": 2 00:06:34.177 } 
00:06:34.177 ], 00:06:34.177 "driver_specific": {} 00:06:34.177 }, 00:06:34.177 { 00:06:34.177 "name": "Passthru0", 00:06:34.177 "aliases": [ 00:06:34.177 "e21ab490-dcd4-5cf8-8aec-38d48760f63b" 00:06:34.177 ], 00:06:34.177 "product_name": "passthru", 00:06:34.177 "block_size": 512, 00:06:34.177 "num_blocks": 16384, 00:06:34.177 "uuid": "e21ab490-dcd4-5cf8-8aec-38d48760f63b", 00:06:34.177 "assigned_rate_limits": { 00:06:34.177 "rw_ios_per_sec": 0, 00:06:34.177 "rw_mbytes_per_sec": 0, 00:06:34.177 "r_mbytes_per_sec": 0, 00:06:34.177 "w_mbytes_per_sec": 0 00:06:34.177 }, 00:06:34.177 "claimed": false, 00:06:34.177 "zoned": false, 00:06:34.177 "supported_io_types": { 00:06:34.177 "read": true, 00:06:34.177 "write": true, 00:06:34.177 "unmap": true, 00:06:34.177 "write_zeroes": true, 00:06:34.177 "flush": true, 00:06:34.177 "reset": true, 00:06:34.177 "compare": false, 00:06:34.177 "compare_and_write": false, 00:06:34.177 "abort": true, 00:06:34.177 "nvme_admin": false, 00:06:34.177 "nvme_io": false 00:06:34.177 }, 00:06:34.177 "memory_domains": [ 00:06:34.177 { 00:06:34.177 "dma_device_id": "system", 00:06:34.177 "dma_device_type": 1 00:06:34.177 }, 00:06:34.177 { 00:06:34.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.177 "dma_device_type": 2 00:06:34.177 } 00:06:34.177 ], 00:06:34.177 "driver_specific": { 00:06:34.177 "passthru": { 00:06:34.177 "name": "Passthru0", 00:06:34.177 "base_bdev_name": "Malloc0" 00:06:34.177 } 00:06:34.177 } 00:06:34.177 } 00:06:34.177 ]' 00:06:34.177 13:05:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:34.177 13:05:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:34.177 13:05:30 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:34.177 13:05:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.177 13:05:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.177 13:05:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.177 13:05:30 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:34.177 13:05:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.177 13:05:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.177 13:05:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.177 13:05:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:34.177 13:05:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.177 13:05:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.177 13:05:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.177 13:05:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:34.177 13:05:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:34.177 13:05:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:34.177 00:06:34.177 real 0m0.313s 00:06:34.177 user 0m0.212s 00:06:34.177 sys 0m0.031s 00:06:34.177 ************************************ 00:06:34.177 END TEST rpc_integrity 00:06:34.177 ************************************ 00:06:34.177 13:05:30 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.177 13:05:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.436 13:05:30 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:34.436 13:05:30 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:34.436 13:05:30 rpc -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.436 13:05:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.436 ************************************ 00:06:34.436 START TEST rpc_plugins 00:06:34.436 ************************************ 00:06:34.436 13:05:30 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:06:34.436 13:05:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:34.436 13:05:30 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.436 13:05:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:34.436 13:05:30 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.436 13:05:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:34.436 13:05:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:34.436 13:05:30 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.436 13:05:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:34.436 13:05:30 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.436 13:05:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:34.436 { 00:06:34.436 "name": "Malloc1", 00:06:34.436 "aliases": [ 00:06:34.436 "1dc34ea4-1789-483e-b3a6-db4a37f2bace" 00:06:34.436 ], 00:06:34.436 "product_name": "Malloc disk", 00:06:34.436 "block_size": 4096, 00:06:34.436 "num_blocks": 256, 00:06:34.436 "uuid": "1dc34ea4-1789-483e-b3a6-db4a37f2bace", 00:06:34.436 "assigned_rate_limits": { 00:06:34.436 "rw_ios_per_sec": 0, 00:06:34.436 "rw_mbytes_per_sec": 0, 00:06:34.436 "r_mbytes_per_sec": 0, 00:06:34.436 "w_mbytes_per_sec": 0 00:06:34.436 }, 00:06:34.436 "claimed": false, 00:06:34.436 "zoned": false, 00:06:34.436 "supported_io_types": { 00:06:34.436 "read": true, 00:06:34.436 "write": true, 00:06:34.436 "unmap": true, 00:06:34.436 "write_zeroes": true, 00:06:34.436 "flush": true, 00:06:34.436 "reset": true, 00:06:34.436 "compare": false, 00:06:34.436 "compare_and_write": false, 00:06:34.436 "abort": true, 00:06:34.436 "nvme_admin": false, 00:06:34.436 "nvme_io": false 00:06:34.436 }, 00:06:34.436 "memory_domains": [ 00:06:34.436 { 00:06:34.436 "dma_device_id": "system", 00:06:34.436 "dma_device_type": 1 00:06:34.436 }, 00:06:34.436 { 00:06:34.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.436 "dma_device_type": 2 00:06:34.436 } 00:06:34.436 ], 00:06:34.436 "driver_specific": {} 00:06:34.436 } 00:06:34.436 ]' 00:06:34.436 13:05:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:34.436 13:05:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:34.436 13:05:31 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:34.436 13:05:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.436 13:05:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:34.436 13:05:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.436 13:05:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:34.436 13:05:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.436 13:05:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:34.436 13:05:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.436 13:05:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:34.436 13:05:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:34.436 13:05:31 rpc.rpc_plugins -- rpc/rpc.sh@36 
-- # '[' 0 == 0 ']' 00:06:34.436 00:06:34.436 real 0m0.155s 00:06:34.436 user 0m0.100s 00:06:34.436 sys 0m0.021s 00:06:34.436 ************************************ 00:06:34.436 END TEST rpc_plugins 00:06:34.436 ************************************ 00:06:34.436 13:05:31 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.436 13:05:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:34.436 13:05:31 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:34.436 13:05:31 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:34.436 13:05:31 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.436 13:05:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.436 ************************************ 00:06:34.436 START TEST rpc_trace_cmd_test 00:06:34.436 ************************************ 00:06:34.436 13:05:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:06:34.436 13:05:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:34.436 13:05:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:34.436 13:05:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.436 13:05:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.436 13:05:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.436 13:05:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:34.436 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid74180", 00:06:34.436 "tpoint_group_mask": "0x8", 00:06:34.436 "iscsi_conn": { 00:06:34.436 "mask": "0x2", 00:06:34.436 "tpoint_mask": "0x0" 00:06:34.436 }, 00:06:34.436 "scsi": { 00:06:34.436 "mask": "0x4", 00:06:34.436 "tpoint_mask": "0x0" 00:06:34.436 }, 00:06:34.436 "bdev": { 00:06:34.436 "mask": "0x8", 00:06:34.436 "tpoint_mask": "0xffffffffffffffff" 00:06:34.436 }, 00:06:34.436 "nvmf_rdma": { 00:06:34.436 "mask": "0x10", 00:06:34.436 "tpoint_mask": "0x0" 00:06:34.436 }, 00:06:34.436 "nvmf_tcp": { 00:06:34.436 "mask": "0x20", 00:06:34.436 "tpoint_mask": "0x0" 00:06:34.436 }, 00:06:34.436 "ftl": { 00:06:34.436 "mask": "0x40", 00:06:34.436 "tpoint_mask": "0x0" 00:06:34.436 }, 00:06:34.436 "blobfs": { 00:06:34.436 "mask": "0x80", 00:06:34.436 "tpoint_mask": "0x0" 00:06:34.436 }, 00:06:34.436 "dsa": { 00:06:34.436 "mask": "0x200", 00:06:34.436 "tpoint_mask": "0x0" 00:06:34.436 }, 00:06:34.436 "thread": { 00:06:34.436 "mask": "0x400", 00:06:34.436 "tpoint_mask": "0x0" 00:06:34.436 }, 00:06:34.436 "nvme_pcie": { 00:06:34.436 "mask": "0x800", 00:06:34.436 "tpoint_mask": "0x0" 00:06:34.436 }, 00:06:34.436 "iaa": { 00:06:34.436 "mask": "0x1000", 00:06:34.436 "tpoint_mask": "0x0" 00:06:34.436 }, 00:06:34.436 "nvme_tcp": { 00:06:34.436 "mask": "0x2000", 00:06:34.436 "tpoint_mask": "0x0" 00:06:34.436 }, 00:06:34.436 "bdev_nvme": { 00:06:34.436 "mask": "0x4000", 00:06:34.436 "tpoint_mask": "0x0" 00:06:34.436 }, 00:06:34.436 "sock": { 00:06:34.436 "mask": "0x8000", 00:06:34.436 "tpoint_mask": "0x0" 00:06:34.436 } 00:06:34.436 }' 00:06:34.436 13:05:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:34.695 13:05:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:34.695 13:05:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:34.695 13:05:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:34.695 13:05:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 
'has("tpoint_shm_path")' 00:06:34.695 13:05:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:34.695 13:05:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:34.695 13:05:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:34.695 13:05:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:34.695 13:05:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:34.695 00:06:34.695 real 0m0.256s 00:06:34.695 user 0m0.223s 00:06:34.695 sys 0m0.023s 00:06:34.695 ************************************ 00:06:34.695 END TEST rpc_trace_cmd_test 00:06:34.695 ************************************ 00:06:34.695 13:05:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.695 13:05:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.695 13:05:31 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:34.695 13:05:31 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:34.695 13:05:31 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:34.695 13:05:31 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:34.695 13:05:31 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.695 13:05:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.953 ************************************ 00:06:34.953 START TEST rpc_daemon_integrity 00:06:34.953 ************************************ 00:06:34.953 13:05:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:06:34.953 13:05:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:34.953 13:05:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.953 13:05:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.953 13:05:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.953 13:05:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:34.953 13:05:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:34.954 { 00:06:34.954 "name": "Malloc2", 00:06:34.954 "aliases": [ 00:06:34.954 "82c5f3c6-2b50-434f-ac07-f19f40adcf27" 00:06:34.954 ], 00:06:34.954 "product_name": "Malloc disk", 00:06:34.954 "block_size": 512, 00:06:34.954 "num_blocks": 16384, 00:06:34.954 "uuid": "82c5f3c6-2b50-434f-ac07-f19f40adcf27", 00:06:34.954 "assigned_rate_limits": { 00:06:34.954 "rw_ios_per_sec": 0, 00:06:34.954 
"rw_mbytes_per_sec": 0, 00:06:34.954 "r_mbytes_per_sec": 0, 00:06:34.954 "w_mbytes_per_sec": 0 00:06:34.954 }, 00:06:34.954 "claimed": false, 00:06:34.954 "zoned": false, 00:06:34.954 "supported_io_types": { 00:06:34.954 "read": true, 00:06:34.954 "write": true, 00:06:34.954 "unmap": true, 00:06:34.954 "write_zeroes": true, 00:06:34.954 "flush": true, 00:06:34.954 "reset": true, 00:06:34.954 "compare": false, 00:06:34.954 "compare_and_write": false, 00:06:34.954 "abort": true, 00:06:34.954 "nvme_admin": false, 00:06:34.954 "nvme_io": false 00:06:34.954 }, 00:06:34.954 "memory_domains": [ 00:06:34.954 { 00:06:34.954 "dma_device_id": "system", 00:06:34.954 "dma_device_type": 1 00:06:34.954 }, 00:06:34.954 { 00:06:34.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.954 "dma_device_type": 2 00:06:34.954 } 00:06:34.954 ], 00:06:34.954 "driver_specific": {} 00:06:34.954 } 00:06:34.954 ]' 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.954 [2024-07-15 13:05:31.584035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:34.954 [2024-07-15 13:05:31.584118] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:34.954 [2024-07-15 13:05:31.584182] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:34.954 [2024-07-15 13:05:31.584206] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:34.954 [2024-07-15 13:05:31.587445] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:34.954 [2024-07-15 13:05:31.587538] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:34.954 Passthru0 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:34.954 { 00:06:34.954 "name": "Malloc2", 00:06:34.954 "aliases": [ 00:06:34.954 "82c5f3c6-2b50-434f-ac07-f19f40adcf27" 00:06:34.954 ], 00:06:34.954 "product_name": "Malloc disk", 00:06:34.954 "block_size": 512, 00:06:34.954 "num_blocks": 16384, 00:06:34.954 "uuid": "82c5f3c6-2b50-434f-ac07-f19f40adcf27", 00:06:34.954 "assigned_rate_limits": { 00:06:34.954 "rw_ios_per_sec": 0, 00:06:34.954 "rw_mbytes_per_sec": 0, 00:06:34.954 "r_mbytes_per_sec": 0, 00:06:34.954 "w_mbytes_per_sec": 0 00:06:34.954 }, 00:06:34.954 "claimed": true, 00:06:34.954 "claim_type": "exclusive_write", 00:06:34.954 "zoned": false, 00:06:34.954 "supported_io_types": { 00:06:34.954 "read": true, 00:06:34.954 "write": true, 00:06:34.954 "unmap": true, 00:06:34.954 "write_zeroes": true, 00:06:34.954 "flush": true, 00:06:34.954 "reset": true, 00:06:34.954 "compare": false, 
00:06:34.954 "compare_and_write": false, 00:06:34.954 "abort": true, 00:06:34.954 "nvme_admin": false, 00:06:34.954 "nvme_io": false 00:06:34.954 }, 00:06:34.954 "memory_domains": [ 00:06:34.954 { 00:06:34.954 "dma_device_id": "system", 00:06:34.954 "dma_device_type": 1 00:06:34.954 }, 00:06:34.954 { 00:06:34.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.954 "dma_device_type": 2 00:06:34.954 } 00:06:34.954 ], 00:06:34.954 "driver_specific": {} 00:06:34.954 }, 00:06:34.954 { 00:06:34.954 "name": "Passthru0", 00:06:34.954 "aliases": [ 00:06:34.954 "c08776a6-498c-567e-8899-be62f85c0a74" 00:06:34.954 ], 00:06:34.954 "product_name": "passthru", 00:06:34.954 "block_size": 512, 00:06:34.954 "num_blocks": 16384, 00:06:34.954 "uuid": "c08776a6-498c-567e-8899-be62f85c0a74", 00:06:34.954 "assigned_rate_limits": { 00:06:34.954 "rw_ios_per_sec": 0, 00:06:34.954 "rw_mbytes_per_sec": 0, 00:06:34.954 "r_mbytes_per_sec": 0, 00:06:34.954 "w_mbytes_per_sec": 0 00:06:34.954 }, 00:06:34.954 "claimed": false, 00:06:34.954 "zoned": false, 00:06:34.954 "supported_io_types": { 00:06:34.954 "read": true, 00:06:34.954 "write": true, 00:06:34.954 "unmap": true, 00:06:34.954 "write_zeroes": true, 00:06:34.954 "flush": true, 00:06:34.954 "reset": true, 00:06:34.954 "compare": false, 00:06:34.954 "compare_and_write": false, 00:06:34.954 "abort": true, 00:06:34.954 "nvme_admin": false, 00:06:34.954 "nvme_io": false 00:06:34.954 }, 00:06:34.954 "memory_domains": [ 00:06:34.954 { 00:06:34.954 "dma_device_id": "system", 00:06:34.954 "dma_device_type": 1 00:06:34.954 }, 00:06:34.954 { 00:06:34.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.954 "dma_device_type": 2 00:06:34.954 } 00:06:34.954 ], 00:06:34.954 "driver_specific": { 00:06:34.954 "passthru": { 00:06:34.954 "name": "Passthru0", 00:06:34.954 "base_bdev_name": "Malloc2" 00:06:34.954 } 00:06:34.954 } 00:06:34.954 } 00:06:34.954 ]' 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.954 13:05:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.213 13:05:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.213 13:05:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:35.213 13:05:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:35.213 13:05:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:35.213 00:06:35.213 real 0m0.317s 00:06:35.213 user 0m0.218s 
00:06:35.213 sys 0m0.033s 00:06:35.213 13:05:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:35.213 13:05:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.213 ************************************ 00:06:35.213 END TEST rpc_daemon_integrity 00:06:35.213 ************************************ 00:06:35.213 13:05:31 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:35.213 13:05:31 rpc -- rpc/rpc.sh@84 -- # killprocess 74180 00:06:35.213 13:05:31 rpc -- common/autotest_common.sh@946 -- # '[' -z 74180 ']' 00:06:35.213 13:05:31 rpc -- common/autotest_common.sh@950 -- # kill -0 74180 00:06:35.213 13:05:31 rpc -- common/autotest_common.sh@951 -- # uname 00:06:35.213 13:05:31 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:35.213 13:05:31 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74180 00:06:35.213 13:05:31 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:35.213 killing process with pid 74180 00:06:35.213 13:05:31 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:35.213 13:05:31 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74180' 00:06:35.213 13:05:31 rpc -- common/autotest_common.sh@965 -- # kill 74180 00:06:35.213 13:05:31 rpc -- common/autotest_common.sh@970 -- # wait 74180 00:06:35.780 00:06:35.780 real 0m2.925s 00:06:35.780 user 0m3.664s 00:06:35.780 sys 0m0.777s 00:06:35.780 13:05:32 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:35.780 ************************************ 00:06:35.780 END TEST rpc 00:06:35.780 ************************************ 00:06:35.780 13:05:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.780 13:05:32 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:35.780 13:05:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:35.780 13:05:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:35.780 13:05:32 -- common/autotest_common.sh@10 -- # set +x 00:06:35.780 ************************************ 00:06:35.780 START TEST skip_rpc 00:06:35.780 ************************************ 00:06:35.780 13:05:32 skip_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:35.780 * Looking for test storage... 
00:06:35.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:35.780 13:05:32 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:35.780 13:05:32 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:35.780 13:05:32 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:35.780 13:05:32 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:35.780 13:05:32 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:35.780 13:05:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.780 ************************************ 00:06:35.780 START TEST skip_rpc 00:06:35.780 ************************************ 00:06:35.780 13:05:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:06:35.780 13:05:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=74374 00:06:35.780 13:05:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:35.780 13:05:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:35.780 13:05:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:36.038 [2024-07-15 13:05:32.589219] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:36.038 [2024-07-15 13:05:32.589406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74374 ] 00:06:36.038 [2024-07-15 13:05:32.736877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.297 [2024-07-15 13:05:32.831439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.562 13:05:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:41.562 13:05:37 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:41.562 13:05:37 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:41.562 13:05:37 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:41.562 13:05:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:41.562 13:05:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:41.562 13:05:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:41.562 13:05:37 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:41.562 13:05:37 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.562 13:05:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.562 13:05:37 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:41.562 13:05:37 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:41.562 13:05:37 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:41.562 13:05:37 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:41.562 13:05:37 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:41.562 13:05:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:41.562 13:05:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 74374 00:06:41.562 13:05:37 
skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 74374 ']' 00:06:41.562 13:05:37 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 74374 00:06:41.562 13:05:37 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:06:41.562 13:05:37 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:41.562 13:05:37 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74374 00:06:41.562 13:05:37 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:41.562 killing process with pid 74374 00:06:41.562 13:05:37 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:41.562 13:05:37 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74374' 00:06:41.562 13:05:37 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 74374 00:06:41.562 13:05:37 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 74374 00:06:41.562 00:06:41.562 real 0m5.511s 00:06:41.562 user 0m5.046s 00:06:41.562 sys 0m0.368s 00:06:41.562 ************************************ 00:06:41.562 END TEST skip_rpc 00:06:41.562 ************************************ 00:06:41.562 13:05:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.562 13:05:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.562 13:05:38 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:41.562 13:05:38 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:41.562 13:05:38 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.562 13:05:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.562 ************************************ 00:06:41.562 START TEST skip_rpc_with_json 00:06:41.562 ************************************ 00:06:41.562 13:05:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:06:41.562 13:05:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:41.562 13:05:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=74461 00:06:41.562 13:05:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:41.562 13:05:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:41.562 13:05:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 74461 00:06:41.562 13:05:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 74461 ']' 00:06:41.562 13:05:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.562 13:05:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:41.562 13:05:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.562 13:05:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:41.562 13:05:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:41.562 [2024-07-15 13:05:38.130906] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:41.562 [2024-07-15 13:05:38.131106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74461 ] 00:06:41.562 [2024-07-15 13:05:38.275636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.821 [2024-07-15 13:05:38.371204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.387 13:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:42.387 13:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:06:42.387 13:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:42.387 13:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.387 13:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:42.387 [2024-07-15 13:05:39.105224] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:42.387 request: 00:06:42.387 { 00:06:42.387 "trtype": "tcp", 00:06:42.387 "method": "nvmf_get_transports", 00:06:42.387 "req_id": 1 00:06:42.387 } 00:06:42.387 Got JSON-RPC error response 00:06:42.387 response: 00:06:42.387 { 00:06:42.387 "code": -19, 00:06:42.387 "message": "No such device" 00:06:42.387 } 00:06:42.387 13:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:42.387 13:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:42.387 13:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.387 13:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:42.387 [2024-07-15 13:05:39.117457] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.387 13:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.387 13:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:42.387 13:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.387 13:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:42.647 13:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.647 13:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:42.647 { 00:06:42.647 "subsystems": [ 00:06:42.647 { 00:06:42.647 "subsystem": "keyring", 00:06:42.647 "config": [] 00:06:42.647 }, 00:06:42.647 { 00:06:42.647 "subsystem": "iobuf", 00:06:42.647 "config": [ 00:06:42.647 { 00:06:42.647 "method": "iobuf_set_options", 00:06:42.647 "params": { 00:06:42.647 "small_pool_count": 8192, 00:06:42.647 "large_pool_count": 1024, 00:06:42.647 "small_bufsize": 8192, 00:06:42.647 "large_bufsize": 135168 00:06:42.647 } 00:06:42.647 } 00:06:42.647 ] 00:06:42.647 }, 00:06:42.647 { 00:06:42.647 "subsystem": "sock", 00:06:42.647 "config": [ 00:06:42.647 { 00:06:42.647 "method": "sock_set_default_impl", 00:06:42.647 "params": { 00:06:42.647 "impl_name": "posix" 00:06:42.647 } 00:06:42.647 }, 00:06:42.647 { 00:06:42.647 "method": "sock_impl_set_options", 00:06:42.647 "params": { 00:06:42.647 "impl_name": "ssl", 00:06:42.647 "recv_buf_size": 4096, 00:06:42.647 "send_buf_size": 4096, 00:06:42.647 
"enable_recv_pipe": true, 00:06:42.647 "enable_quickack": false, 00:06:42.647 "enable_placement_id": 0, 00:06:42.647 "enable_zerocopy_send_server": true, 00:06:42.647 "enable_zerocopy_send_client": false, 00:06:42.647 "zerocopy_threshold": 0, 00:06:42.647 "tls_version": 0, 00:06:42.647 "enable_ktls": false 00:06:42.647 } 00:06:42.647 }, 00:06:42.647 { 00:06:42.647 "method": "sock_impl_set_options", 00:06:42.647 "params": { 00:06:42.647 "impl_name": "posix", 00:06:42.647 "recv_buf_size": 2097152, 00:06:42.647 "send_buf_size": 2097152, 00:06:42.647 "enable_recv_pipe": true, 00:06:42.647 "enable_quickack": false, 00:06:42.647 "enable_placement_id": 0, 00:06:42.647 "enable_zerocopy_send_server": true, 00:06:42.647 "enable_zerocopy_send_client": false, 00:06:42.647 "zerocopy_threshold": 0, 00:06:42.647 "tls_version": 0, 00:06:42.647 "enable_ktls": false 00:06:42.647 } 00:06:42.647 } 00:06:42.647 ] 00:06:42.647 }, 00:06:42.647 { 00:06:42.647 "subsystem": "vmd", 00:06:42.647 "config": [] 00:06:42.647 }, 00:06:42.647 { 00:06:42.647 "subsystem": "accel", 00:06:42.647 "config": [ 00:06:42.647 { 00:06:42.647 "method": "accel_set_options", 00:06:42.647 "params": { 00:06:42.647 "small_cache_size": 128, 00:06:42.647 "large_cache_size": 16, 00:06:42.647 "task_count": 2048, 00:06:42.647 "sequence_count": 2048, 00:06:42.647 "buf_count": 2048 00:06:42.647 } 00:06:42.647 } 00:06:42.647 ] 00:06:42.647 }, 00:06:42.647 { 00:06:42.647 "subsystem": "bdev", 00:06:42.647 "config": [ 00:06:42.647 { 00:06:42.647 "method": "bdev_set_options", 00:06:42.647 "params": { 00:06:42.647 "bdev_io_pool_size": 65535, 00:06:42.647 "bdev_io_cache_size": 256, 00:06:42.647 "bdev_auto_examine": true, 00:06:42.647 "iobuf_small_cache_size": 128, 00:06:42.647 "iobuf_large_cache_size": 16 00:06:42.647 } 00:06:42.647 }, 00:06:42.647 { 00:06:42.647 "method": "bdev_raid_set_options", 00:06:42.647 "params": { 00:06:42.647 "process_window_size_kb": 1024 00:06:42.647 } 00:06:42.647 }, 00:06:42.647 { 00:06:42.647 "method": "bdev_iscsi_set_options", 00:06:42.647 "params": { 00:06:42.647 "timeout_sec": 30 00:06:42.647 } 00:06:42.647 }, 00:06:42.647 { 00:06:42.647 "method": "bdev_nvme_set_options", 00:06:42.647 "params": { 00:06:42.647 "action_on_timeout": "none", 00:06:42.647 "timeout_us": 0, 00:06:42.647 "timeout_admin_us": 0, 00:06:42.647 "keep_alive_timeout_ms": 10000, 00:06:42.647 "arbitration_burst": 0, 00:06:42.647 "low_priority_weight": 0, 00:06:42.647 "medium_priority_weight": 0, 00:06:42.647 "high_priority_weight": 0, 00:06:42.647 "nvme_adminq_poll_period_us": 10000, 00:06:42.647 "nvme_ioq_poll_period_us": 0, 00:06:42.647 "io_queue_requests": 0, 00:06:42.647 "delay_cmd_submit": true, 00:06:42.647 "transport_retry_count": 4, 00:06:42.647 "bdev_retry_count": 3, 00:06:42.647 "transport_ack_timeout": 0, 00:06:42.647 "ctrlr_loss_timeout_sec": 0, 00:06:42.647 "reconnect_delay_sec": 0, 00:06:42.647 "fast_io_fail_timeout_sec": 0, 00:06:42.647 "disable_auto_failback": false, 00:06:42.647 "generate_uuids": false, 00:06:42.647 "transport_tos": 0, 00:06:42.647 "nvme_error_stat": false, 00:06:42.647 "rdma_srq_size": 0, 00:06:42.647 "io_path_stat": false, 00:06:42.647 "allow_accel_sequence": false, 00:06:42.647 "rdma_max_cq_size": 0, 00:06:42.647 "rdma_cm_event_timeout_ms": 0, 00:06:42.647 "dhchap_digests": [ 00:06:42.647 "sha256", 00:06:42.647 "sha384", 00:06:42.647 "sha512" 00:06:42.647 ], 00:06:42.647 "dhchap_dhgroups": [ 00:06:42.647 "null", 00:06:42.647 "ffdhe2048", 00:06:42.647 "ffdhe3072", 00:06:42.647 "ffdhe4096", 00:06:42.647 "ffdhe6144", 
00:06:42.647 "ffdhe8192" 00:06:42.647 ] 00:06:42.647 } 00:06:42.647 }, 00:06:42.647 { 00:06:42.647 "method": "bdev_nvme_set_hotplug", 00:06:42.647 "params": { 00:06:42.647 "period_us": 100000, 00:06:42.647 "enable": false 00:06:42.647 } 00:06:42.647 }, 00:06:42.647 { 00:06:42.647 "method": "bdev_wait_for_examine" 00:06:42.647 } 00:06:42.647 ] 00:06:42.647 }, 00:06:42.647 { 00:06:42.647 "subsystem": "scsi", 00:06:42.647 "config": null 00:06:42.647 }, 00:06:42.647 { 00:06:42.647 "subsystem": "scheduler", 00:06:42.647 "config": [ 00:06:42.647 { 00:06:42.647 "method": "framework_set_scheduler", 00:06:42.647 "params": { 00:06:42.647 "name": "static" 00:06:42.647 } 00:06:42.647 } 00:06:42.647 ] 00:06:42.647 }, 00:06:42.647 { 00:06:42.647 "subsystem": "vhost_scsi", 00:06:42.647 "config": [] 00:06:42.647 }, 00:06:42.647 { 00:06:42.647 "subsystem": "vhost_blk", 00:06:42.647 "config": [] 00:06:42.647 }, 00:06:42.647 { 00:06:42.647 "subsystem": "ublk", 00:06:42.647 "config": [] 00:06:42.647 }, 00:06:42.647 { 00:06:42.647 "subsystem": "nbd", 00:06:42.647 "config": [] 00:06:42.647 }, 00:06:42.647 { 00:06:42.647 "subsystem": "nvmf", 00:06:42.647 "config": [ 00:06:42.647 { 00:06:42.647 "method": "nvmf_set_config", 00:06:42.647 "params": { 00:06:42.647 "discovery_filter": "match_any", 00:06:42.647 "admin_cmd_passthru": { 00:06:42.647 "identify_ctrlr": false 00:06:42.647 } 00:06:42.647 } 00:06:42.647 }, 00:06:42.647 { 00:06:42.647 "method": "nvmf_set_max_subsystems", 00:06:42.647 "params": { 00:06:42.647 "max_subsystems": 1024 00:06:42.647 } 00:06:42.647 }, 00:06:42.647 { 00:06:42.647 "method": "nvmf_set_crdt", 00:06:42.647 "params": { 00:06:42.647 "crdt1": 0, 00:06:42.647 "crdt2": 0, 00:06:42.647 "crdt3": 0 00:06:42.647 } 00:06:42.647 }, 00:06:42.647 { 00:06:42.647 "method": "nvmf_create_transport", 00:06:42.647 "params": { 00:06:42.647 "trtype": "TCP", 00:06:42.647 "max_queue_depth": 128, 00:06:42.647 "max_io_qpairs_per_ctrlr": 127, 00:06:42.647 "in_capsule_data_size": 4096, 00:06:42.647 "max_io_size": 131072, 00:06:42.647 "io_unit_size": 131072, 00:06:42.647 "max_aq_depth": 128, 00:06:42.647 "num_shared_buffers": 511, 00:06:42.647 "buf_cache_size": 4294967295, 00:06:42.647 "dif_insert_or_strip": false, 00:06:42.647 "zcopy": false, 00:06:42.647 "c2h_success": true, 00:06:42.647 "sock_priority": 0, 00:06:42.647 "abort_timeout_sec": 1, 00:06:42.647 "ack_timeout": 0, 00:06:42.647 "data_wr_pool_size": 0 00:06:42.647 } 00:06:42.647 } 00:06:42.647 ] 00:06:42.647 }, 00:06:42.647 { 00:06:42.647 "subsystem": "iscsi", 00:06:42.647 "config": [ 00:06:42.647 { 00:06:42.647 "method": "iscsi_set_options", 00:06:42.647 "params": { 00:06:42.648 "node_base": "iqn.2016-06.io.spdk", 00:06:42.648 "max_sessions": 128, 00:06:42.648 "max_connections_per_session": 2, 00:06:42.648 "max_queue_depth": 64, 00:06:42.648 "default_time2wait": 2, 00:06:42.648 "default_time2retain": 20, 00:06:42.648 "first_burst_length": 8192, 00:06:42.648 "immediate_data": true, 00:06:42.648 "allow_duplicated_isid": false, 00:06:42.648 "error_recovery_level": 0, 00:06:42.648 "nop_timeout": 60, 00:06:42.648 "nop_in_interval": 30, 00:06:42.648 "disable_chap": false, 00:06:42.648 "require_chap": false, 00:06:42.648 "mutual_chap": false, 00:06:42.648 "chap_group": 0, 00:06:42.648 "max_large_datain_per_connection": 64, 00:06:42.648 "max_r2t_per_connection": 4, 00:06:42.648 "pdu_pool_size": 36864, 00:06:42.648 "immediate_data_pool_size": 16384, 00:06:42.648 "data_out_pool_size": 2048 00:06:42.648 } 00:06:42.648 } 00:06:42.648 ] 00:06:42.648 } 00:06:42.648 ] 
00:06:42.648 } 00:06:42.648 13:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:42.648 13:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 74461 00:06:42.648 13:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 74461 ']' 00:06:42.648 13:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 74461 00:06:42.648 13:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:42.648 13:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:42.648 13:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74461 00:06:42.648 killing process with pid 74461 00:06:42.648 13:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:42.648 13:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:42.648 13:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74461' 00:06:42.648 13:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 74461 00:06:42.648 13:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 74461 00:06:43.214 13:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=74490 00:06:43.214 13:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:43.214 13:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:48.482 13:05:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 74490 00:06:48.482 13:05:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 74490 ']' 00:06:48.482 13:05:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 74490 00:06:48.482 13:05:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:48.482 13:05:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:48.482 13:05:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74490 00:06:48.482 killing process with pid 74490 00:06:48.482 13:05:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:48.482 13:05:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:48.482 13:05:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74490' 00:06:48.482 13:05:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 74490 00:06:48.482 13:05:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 74490 00:06:48.741 13:05:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:48.741 13:05:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:48.741 00:06:48.741 real 0m7.282s 00:06:48.741 user 0m6.823s 00:06:48.741 sys 0m0.869s 00:06:48.741 ************************************ 00:06:48.741 END TEST skip_rpc_with_json 00:06:48.741 ************************************ 00:06:48.741 13:05:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:06:48.741 13:05:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:48.741 13:05:45 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:48.741 13:05:45 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:48.741 13:05:45 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.741 13:05:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.741 ************************************ 00:06:48.741 START TEST skip_rpc_with_delay 00:06:48.741 ************************************ 00:06:48.741 13:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:06:48.741 13:05:45 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:48.741 13:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:48.741 13:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:48.741 13:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:48.741 13:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.741 13:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:48.741 13:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.741 13:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:48.741 13:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.741 13:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:48.741 13:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:48.741 13:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:48.741 [2024-07-15 13:05:45.470616] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:48.741 [2024-07-15 13:05:45.470815] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:48.999 13:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:48.999 13:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.999 13:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:48.999 13:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.999 ************************************ 00:06:48.999 END TEST skip_rpc_with_delay 00:06:48.999 00:06:48.999 real 0m0.173s 00:06:48.999 user 0m0.100s 00:06:48.999 sys 0m0.071s 00:06:48.999 13:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.999 13:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:48.999 ************************************ 00:06:48.999 13:05:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:48.999 13:05:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:48.999 13:05:45 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:48.999 13:05:45 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:48.999 13:05:45 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.999 13:05:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.999 ************************************ 00:06:48.999 START TEST exit_on_failed_rpc_init 00:06:48.999 ************************************ 00:06:48.999 13:05:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:06:48.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.999 13:05:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=74609 00:06:48.999 13:05:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 74609 00:06:48.999 13:05:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 74609 ']' 00:06:48.999 13:05:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.999 13:05:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:48.999 13:05:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.999 13:05:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:48.999 13:05:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:48.999 13:05:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:48.999 [2024-07-15 13:05:45.718842] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:48.999 [2024-07-15 13:05:45.719066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74609 ] 00:06:49.258 [2024-07-15 13:05:45.864546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.258 [2024-07-15 13:05:45.974520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.192 13:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:50.192 13:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:06:50.192 13:05:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:50.192 13:05:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:50.192 13:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:50.192 13:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:50.192 13:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:50.192 13:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.192 13:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:50.192 13:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.192 13:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:50.192 13:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.192 13:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:50.192 13:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:50.192 13:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:50.192 [2024-07-15 13:05:46.781115] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:50.192 [2024-07-15 13:05:46.782316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74627 ] 00:06:50.450 [2024-07-15 13:05:46.936115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.450 [2024-07-15 13:05:47.075116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.450 [2024-07-15 13:05:47.075301] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:50.450 [2024-07-15 13:05:47.075339] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:50.450 [2024-07-15 13:05:47.075381] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:50.709 13:05:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:50.709 13:05:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:50.709 13:05:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:50.709 13:05:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:50.709 13:05:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:50.709 13:05:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:50.709 13:05:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:50.709 13:05:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 74609 00:06:50.709 13:05:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 74609 ']' 00:06:50.709 13:05:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 74609 00:06:50.709 13:05:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:06:50.709 13:05:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:50.709 13:05:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74609 00:06:50.709 killing process with pid 74609 00:06:50.709 13:05:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:50.709 13:05:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:50.709 13:05:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74609' 00:06:50.709 13:05:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 74609 00:06:50.709 13:05:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 74609 00:06:51.276 00:06:51.276 real 0m2.168s 00:06:51.276 user 0m2.465s 00:06:51.276 sys 0m0.627s 00:06:51.276 13:05:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.276 13:05:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:51.276 ************************************ 00:06:51.276 END TEST exit_on_failed_rpc_init 00:06:51.276 ************************************ 00:06:51.276 13:05:47 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:51.276 00:06:51.276 real 0m15.448s 00:06:51.276 user 0m14.537s 00:06:51.276 sys 0m2.134s 00:06:51.276 13:05:47 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.276 ************************************ 00:06:51.276 END TEST skip_rpc 00:06:51.276 ************************************ 00:06:51.276 13:05:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.276 13:05:47 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:51.276 13:05:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:51.276 13:05:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.276 13:05:47 -- common/autotest_common.sh@10 -- # set +x 00:06:51.276 
************************************ 00:06:51.276 START TEST rpc_client 00:06:51.276 ************************************ 00:06:51.276 13:05:47 rpc_client -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:51.276 * Looking for test storage... 00:06:51.276 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:51.276 13:05:47 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:51.276 OK 00:06:51.535 13:05:48 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:51.535 00:06:51.535 real 0m0.150s 00:06:51.535 user 0m0.064s 00:06:51.535 sys 0m0.092s 00:06:51.535 13:05:48 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.535 13:05:48 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:51.535 ************************************ 00:06:51.535 END TEST rpc_client 00:06:51.535 ************************************ 00:06:51.535 13:05:48 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:51.535 13:05:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:51.535 13:05:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.535 13:05:48 -- common/autotest_common.sh@10 -- # set +x 00:06:51.535 ************************************ 00:06:51.535 START TEST json_config 00:06:51.535 ************************************ 00:06:51.535 13:05:48 json_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:51.535 13:05:48 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:51.535 13:05:48 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:51.535 13:05:48 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:51.535 13:05:48 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:51.535 13:05:48 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:51.535 13:05:48 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:51.535 13:05:48 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:51.535 13:05:48 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:51.535 13:05:48 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:51.535 13:05:48 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:51.535 13:05:48 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:51.535 13:05:48 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:51.535 13:05:48 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c750bd6c-4972-4ac5-9386-4f497ffea9b0 00:06:51.535 13:05:48 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=c750bd6c-4972-4ac5-9386-4f497ffea9b0 00:06:51.535 13:05:48 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:51.535 13:05:48 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:51.535 13:05:48 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:51.535 13:05:48 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:51.535 13:05:48 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:51.535 13:05:48 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.535 13:05:48 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.535 13:05:48 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.535 13:05:48 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.535 13:05:48 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.535 13:05:48 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.535 13:05:48 json_config -- paths/export.sh@5 -- # export PATH 00:06:51.535 13:05:48 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.535 13:05:48 json_config -- nvmf/common.sh@47 -- # : 0 00:06:51.535 13:05:48 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:51.535 13:05:48 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:51.535 13:05:48 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:51.535 13:05:48 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:51.535 13:05:48 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:51.535 13:05:48 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:51.535 13:05:48 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:51.535 13:05:48 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:51.535 13:05:48 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:51.536 13:05:48 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:51.536 13:05:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:51.536 13:05:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:51.536 13:05:48 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:51.536 13:05:48 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:51.536 
WARNING: No tests are enabled so not running JSON configuration tests 00:06:51.536 13:05:48 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:51.536 00:06:51.536 real 0m0.082s 00:06:51.536 user 0m0.026s 00:06:51.536 sys 0m0.055s 00:06:51.536 13:05:48 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.536 13:05:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:51.536 ************************************ 00:06:51.536 END TEST json_config 00:06:51.536 ************************************ 00:06:51.536 13:05:48 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:51.536 13:05:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:51.536 13:05:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.536 13:05:48 -- common/autotest_common.sh@10 -- # set +x 00:06:51.536 ************************************ 00:06:51.536 START TEST json_config_extra_key 00:06:51.536 ************************************ 00:06:51.536 13:05:48 json_config_extra_key -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:51.794 13:05:48 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:51.794 13:05:48 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:51.794 13:05:48 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:51.794 13:05:48 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:51.794 13:05:48 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:51.794 13:05:48 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:51.794 13:05:48 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:51.794 13:05:48 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:51.794 13:05:48 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:51.794 13:05:48 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:51.794 13:05:48 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:51.794 13:05:48 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:51.794 13:05:48 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c750bd6c-4972-4ac5-9386-4f497ffea9b0 00:06:51.794 13:05:48 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=c750bd6c-4972-4ac5-9386-4f497ffea9b0 00:06:51.794 13:05:48 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:51.794 13:05:48 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:51.794 13:05:48 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:51.794 13:05:48 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:51.794 13:05:48 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:51.794 13:05:48 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.794 13:05:48 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.794 13:05:48 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.794 
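The json_config run above stops early because none of the subsystem test flags are set. A minimal sketch of that gate, using the flag names visible in the log (how the flags get their values is outside this sketch; in the real harness they come from the autotest configuration):

    # Skip the JSON configuration tests when no subsystem under test needs them.
    if (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )); then
        echo 'WARNING: No tests are enabled so not running JSON configuration tests'
        exit 0
    fi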
13:05:48 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.794 13:05:48 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.794 13:05:48 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.794 13:05:48 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:51.794 13:05:48 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.794 13:05:48 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:51.794 13:05:48 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:51.794 13:05:48 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:51.794 13:05:48 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:51.794 13:05:48 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:51.794 13:05:48 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:51.794 13:05:48 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:51.794 13:05:48 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:51.794 13:05:48 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:51.795 13:05:48 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:51.795 13:05:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:51.795 13:05:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:51.795 13:05:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:51.795 13:05:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:51.795 13:05:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:51.795 13:05:48 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:51.795 13:05:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:51.795 13:05:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:51.795 INFO: launching applications... 00:06:51.795 13:05:48 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:51.795 13:05:48 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:51.795 13:05:48 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:51.795 13:05:48 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:51.795 13:05:48 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:51.795 13:05:48 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:51.795 13:05:48 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:51.795 13:05:48 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:51.795 13:05:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:51.795 13:05:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:51.795 13:05:48 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=74785 00:06:51.795 13:05:48 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:51.795 13:05:48 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:51.795 Waiting for target to run... 00:06:51.795 13:05:48 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 74785 /var/tmp/spdk_tgt.sock 00:06:51.795 13:05:48 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 74785 ']' 00:06:51.795 13:05:48 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:51.795 13:05:48 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:51.795 13:05:48 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:51.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:51.795 13:05:48 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:51.795 13:05:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:51.795 [2024-07-15 13:05:48.424465] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
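The lines above show json_config_extra_key starting a target from extra_key.json on a private RPC socket and waiting for it to listen. A rough shell sketch of that start-up, with the retry count and interval as illustrative assumptions and error handling omitted:

    # Launch spdk_tgt against the extra-key JSON config on its own RPC socket.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk_tgt.sock
    "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" \
        --json "$SPDK_DIR/test/json_config/extra_key.json" &
    tgt_pid=$!
    # Poll until the RPC socket answers, roughly what waitforlisten does above.
    for _ in $(seq 1 30); do
        "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done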
00:06:51.795 [2024-07-15 13:05:48.424656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74785 ] 00:06:52.361 [2024-07-15 13:05:48.874568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.361 [2024-07-15 13:05:48.953007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.620 00:06:52.620 INFO: shutting down applications... 00:06:52.620 13:05:49 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:52.620 13:05:49 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:06:52.620 13:05:49 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:52.620 13:05:49 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:52.620 13:05:49 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:52.620 13:05:49 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:52.620 13:05:49 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:52.620 13:05:49 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 74785 ]] 00:06:52.620 13:05:49 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 74785 00:06:52.620 13:05:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:52.620 13:05:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:52.620 13:05:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74785 00:06:52.620 13:05:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:53.188 13:05:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:53.188 13:05:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:53.188 13:05:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74785 00:06:53.188 13:05:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:53.756 13:05:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:53.756 13:05:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:53.756 13:05:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74785 00:06:53.756 13:05:50 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:53.756 13:05:50 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:53.756 13:05:50 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:53.756 13:05:50 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:53.756 SPDK target shutdown done 00:06:53.756 Success 00:06:53.756 13:05:50 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:53.756 00:06:53.756 real 0m2.128s 00:06:53.756 user 0m1.480s 00:06:53.756 sys 0m0.559s 00:06:53.756 ************************************ 00:06:53.756 END TEST json_config_extra_key 00:06:53.756 ************************************ 00:06:53.756 13:05:50 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.756 13:05:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:53.756 13:05:50 -- spdk/autotest.sh@174 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:53.756 13:05:50 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:53.756 13:05:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.756 13:05:50 -- common/autotest_common.sh@10 -- # set +x 00:06:53.756 ************************************ 00:06:53.756 START TEST alias_rpc 00:06:53.756 ************************************ 00:06:53.756 13:05:50 alias_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:53.756 * Looking for test storage... 00:06:53.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:53.756 13:05:50 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:53.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.756 13:05:50 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=74857 00:06:53.756 13:05:50 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:53.756 13:05:50 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 74857 00:06:53.756 13:05:50 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 74857 ']' 00:06:53.756 13:05:50 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.756 13:05:50 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:53.756 13:05:50 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.756 13:05:50 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:53.756 13:05:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.019 [2024-07-15 13:05:50.606450] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
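The "shutting down applications..." sequence logged above sends SIGINT to the target and then polls the PID until it exits. A compact sketch of that loop, reusing tgt_pid from the start-up sketch earlier (retry count and interval taken from the log, cleanup simplified):

    # Ask the target to exit, then wait until the process is gone.
    kill -SIGINT "$tgt_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$tgt_pid" 2>/dev/null || break   # kill -0 only tests for existence
        sleep 0.5
    done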
00:06:54.019 [2024-07-15 13:05:50.606852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74857 ] 00:06:54.292 [2024-07-15 13:05:50.757644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.292 [2024-07-15 13:05:50.865538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.857 13:05:51 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:54.857 13:05:51 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:54.857 13:05:51 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:55.116 13:05:51 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 74857 00:06:55.116 13:05:51 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 74857 ']' 00:06:55.116 13:05:51 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 74857 00:06:55.116 13:05:51 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:06:55.116 13:05:51 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:55.116 13:05:51 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74857 00:06:55.116 killing process with pid 74857 00:06:55.116 13:05:51 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:55.116 13:05:51 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:55.116 13:05:51 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74857' 00:06:55.116 13:05:51 alias_rpc -- common/autotest_common.sh@965 -- # kill 74857 00:06:55.116 13:05:51 alias_rpc -- common/autotest_common.sh@970 -- # wait 74857 00:06:55.683 00:06:55.683 real 0m1.869s 00:06:55.683 user 0m1.972s 00:06:55.683 sys 0m0.551s 00:06:55.683 13:05:52 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.683 13:05:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.683 ************************************ 00:06:55.683 END TEST alias_rpc 00:06:55.683 ************************************ 00:06:55.683 13:05:52 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:55.683 13:05:52 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:55.683 13:05:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:55.683 13:05:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.683 13:05:52 -- common/autotest_common.sh@10 -- # set +x 00:06:55.683 ************************************ 00:06:55.683 START TEST spdkcli_tcp 00:06:55.683 ************************************ 00:06:55.683 13:05:52 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:55.683 * Looking for test storage... 
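The alias_rpc run above drives the freshly started target through rpc.py load_config (with the -i flag as invoked in the log). A generic configuration round trip over the default socket could look like the following; the dump filename is purely illustrative:

    # Snapshot the running configuration, then feed it back to the target.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /tmp/current_config.json
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config < /tmp/current_config.json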
00:06:55.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:55.683 13:05:52 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:55.683 13:05:52 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:55.683 13:05:52 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:55.683 13:05:52 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:55.683 13:05:52 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:55.683 13:05:52 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:55.683 13:05:52 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:55.683 13:05:52 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:55.683 13:05:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:55.683 13:05:52 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=74934 00:06:55.684 13:05:52 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 74934 00:06:55.684 13:05:52 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 74934 ']' 00:06:55.684 13:05:52 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.684 13:05:52 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:55.684 13:05:52 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:55.684 13:05:52 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.684 13:05:52 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:55.684 13:05:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:55.942 [2024-07-15 13:05:52.590651] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:55.942 [2024-07-15 13:05:52.590846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74934 ] 00:06:56.201 [2024-07-15 13:05:52.741795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:56.201 [2024-07-15 13:05:52.828828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.201 [2024-07-15 13:05:52.828923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.136 13:05:53 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:57.136 13:05:53 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:57.136 13:05:53 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=74951 00:06:57.136 13:05:53 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:57.136 13:05:53 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:57.136 [ 00:06:57.136 "bdev_malloc_delete", 00:06:57.136 "bdev_malloc_create", 00:06:57.136 "bdev_null_resize", 00:06:57.136 "bdev_null_delete", 00:06:57.136 "bdev_null_create", 00:06:57.136 "bdev_nvme_cuse_unregister", 00:06:57.136 "bdev_nvme_cuse_register", 00:06:57.136 "bdev_opal_new_user", 00:06:57.136 "bdev_opal_set_lock_state", 00:06:57.136 "bdev_opal_delete", 00:06:57.136 "bdev_opal_get_info", 00:06:57.136 "bdev_opal_create", 00:06:57.136 "bdev_nvme_opal_revert", 00:06:57.136 "bdev_nvme_opal_init", 00:06:57.136 "bdev_nvme_send_cmd", 00:06:57.136 "bdev_nvme_get_path_iostat", 00:06:57.136 "bdev_nvme_get_mdns_discovery_info", 00:06:57.136 "bdev_nvme_stop_mdns_discovery", 00:06:57.136 "bdev_nvme_start_mdns_discovery", 00:06:57.136 "bdev_nvme_set_multipath_policy", 00:06:57.136 "bdev_nvme_set_preferred_path", 00:06:57.136 "bdev_nvme_get_io_paths", 00:06:57.136 "bdev_nvme_remove_error_injection", 00:06:57.136 "bdev_nvme_add_error_injection", 00:06:57.136 "bdev_nvme_get_discovery_info", 00:06:57.136 "bdev_nvme_stop_discovery", 00:06:57.136 "bdev_nvme_start_discovery", 00:06:57.136 "bdev_nvme_get_controller_health_info", 00:06:57.136 "bdev_nvme_disable_controller", 00:06:57.136 "bdev_nvme_enable_controller", 00:06:57.136 "bdev_nvme_reset_controller", 00:06:57.136 "bdev_nvme_get_transport_statistics", 00:06:57.136 "bdev_nvme_apply_firmware", 00:06:57.136 "bdev_nvme_detach_controller", 00:06:57.136 "bdev_nvme_get_controllers", 00:06:57.136 "bdev_nvme_attach_controller", 00:06:57.136 "bdev_nvme_set_hotplug", 00:06:57.136 "bdev_nvme_set_options", 00:06:57.136 "bdev_passthru_delete", 00:06:57.136 "bdev_passthru_create", 00:06:57.136 "bdev_lvol_set_parent_bdev", 00:06:57.136 "bdev_lvol_set_parent", 00:06:57.136 "bdev_lvol_check_shallow_copy", 00:06:57.136 "bdev_lvol_start_shallow_copy", 00:06:57.136 "bdev_lvol_grow_lvstore", 00:06:57.136 "bdev_lvol_get_lvols", 00:06:57.136 "bdev_lvol_get_lvstores", 00:06:57.136 "bdev_lvol_delete", 00:06:57.136 "bdev_lvol_set_read_only", 00:06:57.136 "bdev_lvol_resize", 00:06:57.136 "bdev_lvol_decouple_parent", 00:06:57.136 "bdev_lvol_inflate", 00:06:57.136 "bdev_lvol_rename", 00:06:57.137 "bdev_lvol_clone_bdev", 00:06:57.137 "bdev_lvol_clone", 00:06:57.137 "bdev_lvol_snapshot", 00:06:57.137 "bdev_lvol_create", 00:06:57.137 "bdev_lvol_delete_lvstore", 00:06:57.137 "bdev_lvol_rename_lvstore", 00:06:57.137 "bdev_lvol_create_lvstore", 00:06:57.137 
"bdev_raid_set_options", 00:06:57.137 "bdev_raid_remove_base_bdev", 00:06:57.137 "bdev_raid_add_base_bdev", 00:06:57.137 "bdev_raid_delete", 00:06:57.137 "bdev_raid_create", 00:06:57.137 "bdev_raid_get_bdevs", 00:06:57.137 "bdev_error_inject_error", 00:06:57.137 "bdev_error_delete", 00:06:57.137 "bdev_error_create", 00:06:57.137 "bdev_split_delete", 00:06:57.137 "bdev_split_create", 00:06:57.137 "bdev_delay_delete", 00:06:57.137 "bdev_delay_create", 00:06:57.137 "bdev_delay_update_latency", 00:06:57.137 "bdev_zone_block_delete", 00:06:57.137 "bdev_zone_block_create", 00:06:57.137 "blobfs_create", 00:06:57.137 "blobfs_detect", 00:06:57.137 "blobfs_set_cache_size", 00:06:57.137 "bdev_xnvme_delete", 00:06:57.137 "bdev_xnvme_create", 00:06:57.137 "bdev_aio_delete", 00:06:57.137 "bdev_aio_rescan", 00:06:57.137 "bdev_aio_create", 00:06:57.137 "bdev_ftl_set_property", 00:06:57.137 "bdev_ftl_get_properties", 00:06:57.137 "bdev_ftl_get_stats", 00:06:57.137 "bdev_ftl_unmap", 00:06:57.137 "bdev_ftl_unload", 00:06:57.137 "bdev_ftl_delete", 00:06:57.137 "bdev_ftl_load", 00:06:57.137 "bdev_ftl_create", 00:06:57.137 "bdev_virtio_attach_controller", 00:06:57.137 "bdev_virtio_scsi_get_devices", 00:06:57.137 "bdev_virtio_detach_controller", 00:06:57.137 "bdev_virtio_blk_set_hotplug", 00:06:57.137 "bdev_iscsi_delete", 00:06:57.137 "bdev_iscsi_create", 00:06:57.137 "bdev_iscsi_set_options", 00:06:57.137 "accel_error_inject_error", 00:06:57.137 "ioat_scan_accel_module", 00:06:57.137 "dsa_scan_accel_module", 00:06:57.137 "iaa_scan_accel_module", 00:06:57.137 "keyring_file_remove_key", 00:06:57.137 "keyring_file_add_key", 00:06:57.137 "keyring_linux_set_options", 00:06:57.137 "iscsi_get_histogram", 00:06:57.137 "iscsi_enable_histogram", 00:06:57.137 "iscsi_set_options", 00:06:57.137 "iscsi_get_auth_groups", 00:06:57.137 "iscsi_auth_group_remove_secret", 00:06:57.137 "iscsi_auth_group_add_secret", 00:06:57.137 "iscsi_delete_auth_group", 00:06:57.137 "iscsi_create_auth_group", 00:06:57.137 "iscsi_set_discovery_auth", 00:06:57.137 "iscsi_get_options", 00:06:57.137 "iscsi_target_node_request_logout", 00:06:57.137 "iscsi_target_node_set_redirect", 00:06:57.137 "iscsi_target_node_set_auth", 00:06:57.137 "iscsi_target_node_add_lun", 00:06:57.137 "iscsi_get_stats", 00:06:57.137 "iscsi_get_connections", 00:06:57.137 "iscsi_portal_group_set_auth", 00:06:57.137 "iscsi_start_portal_group", 00:06:57.137 "iscsi_delete_portal_group", 00:06:57.137 "iscsi_create_portal_group", 00:06:57.137 "iscsi_get_portal_groups", 00:06:57.137 "iscsi_delete_target_node", 00:06:57.137 "iscsi_target_node_remove_pg_ig_maps", 00:06:57.137 "iscsi_target_node_add_pg_ig_maps", 00:06:57.137 "iscsi_create_target_node", 00:06:57.137 "iscsi_get_target_nodes", 00:06:57.137 "iscsi_delete_initiator_group", 00:06:57.137 "iscsi_initiator_group_remove_initiators", 00:06:57.137 "iscsi_initiator_group_add_initiators", 00:06:57.137 "iscsi_create_initiator_group", 00:06:57.137 "iscsi_get_initiator_groups", 00:06:57.137 "nvmf_set_crdt", 00:06:57.137 "nvmf_set_config", 00:06:57.137 "nvmf_set_max_subsystems", 00:06:57.137 "nvmf_stop_mdns_prr", 00:06:57.137 "nvmf_publish_mdns_prr", 00:06:57.137 "nvmf_subsystem_get_listeners", 00:06:57.137 "nvmf_subsystem_get_qpairs", 00:06:57.137 "nvmf_subsystem_get_controllers", 00:06:57.137 "nvmf_get_stats", 00:06:57.137 "nvmf_get_transports", 00:06:57.137 "nvmf_create_transport", 00:06:57.137 "nvmf_get_targets", 00:06:57.137 "nvmf_delete_target", 00:06:57.137 "nvmf_create_target", 00:06:57.137 "nvmf_subsystem_allow_any_host", 
00:06:57.137 "nvmf_subsystem_remove_host", 00:06:57.137 "nvmf_subsystem_add_host", 00:06:57.137 "nvmf_ns_remove_host", 00:06:57.137 "nvmf_ns_add_host", 00:06:57.137 "nvmf_subsystem_remove_ns", 00:06:57.137 "nvmf_subsystem_add_ns", 00:06:57.137 "nvmf_subsystem_listener_set_ana_state", 00:06:57.137 "nvmf_discovery_get_referrals", 00:06:57.137 "nvmf_discovery_remove_referral", 00:06:57.137 "nvmf_discovery_add_referral", 00:06:57.137 "nvmf_subsystem_remove_listener", 00:06:57.137 "nvmf_subsystem_add_listener", 00:06:57.137 "nvmf_delete_subsystem", 00:06:57.137 "nvmf_create_subsystem", 00:06:57.137 "nvmf_get_subsystems", 00:06:57.137 "env_dpdk_get_mem_stats", 00:06:57.137 "nbd_get_disks", 00:06:57.137 "nbd_stop_disk", 00:06:57.137 "nbd_start_disk", 00:06:57.137 "ublk_recover_disk", 00:06:57.137 "ublk_get_disks", 00:06:57.137 "ublk_stop_disk", 00:06:57.137 "ublk_start_disk", 00:06:57.137 "ublk_destroy_target", 00:06:57.137 "ublk_create_target", 00:06:57.137 "virtio_blk_create_transport", 00:06:57.137 "virtio_blk_get_transports", 00:06:57.137 "vhost_controller_set_coalescing", 00:06:57.137 "vhost_get_controllers", 00:06:57.137 "vhost_delete_controller", 00:06:57.137 "vhost_create_blk_controller", 00:06:57.137 "vhost_scsi_controller_remove_target", 00:06:57.137 "vhost_scsi_controller_add_target", 00:06:57.137 "vhost_start_scsi_controller", 00:06:57.137 "vhost_create_scsi_controller", 00:06:57.137 "thread_set_cpumask", 00:06:57.137 "framework_get_scheduler", 00:06:57.137 "framework_set_scheduler", 00:06:57.137 "framework_get_reactors", 00:06:57.137 "thread_get_io_channels", 00:06:57.137 "thread_get_pollers", 00:06:57.137 "thread_get_stats", 00:06:57.137 "framework_monitor_context_switch", 00:06:57.137 "spdk_kill_instance", 00:06:57.137 "log_enable_timestamps", 00:06:57.137 "log_get_flags", 00:06:57.137 "log_clear_flag", 00:06:57.137 "log_set_flag", 00:06:57.137 "log_get_level", 00:06:57.137 "log_set_level", 00:06:57.137 "log_get_print_level", 00:06:57.137 "log_set_print_level", 00:06:57.137 "framework_enable_cpumask_locks", 00:06:57.137 "framework_disable_cpumask_locks", 00:06:57.137 "framework_wait_init", 00:06:57.137 "framework_start_init", 00:06:57.137 "scsi_get_devices", 00:06:57.137 "bdev_get_histogram", 00:06:57.137 "bdev_enable_histogram", 00:06:57.137 "bdev_set_qos_limit", 00:06:57.137 "bdev_set_qd_sampling_period", 00:06:57.137 "bdev_get_bdevs", 00:06:57.137 "bdev_reset_iostat", 00:06:57.137 "bdev_get_iostat", 00:06:57.137 "bdev_examine", 00:06:57.137 "bdev_wait_for_examine", 00:06:57.137 "bdev_set_options", 00:06:57.137 "notify_get_notifications", 00:06:57.137 "notify_get_types", 00:06:57.137 "accel_get_stats", 00:06:57.137 "accel_set_options", 00:06:57.137 "accel_set_driver", 00:06:57.137 "accel_crypto_key_destroy", 00:06:57.137 "accel_crypto_keys_get", 00:06:57.137 "accel_crypto_key_create", 00:06:57.137 "accel_assign_opc", 00:06:57.137 "accel_get_module_info", 00:06:57.137 "accel_get_opc_assignments", 00:06:57.137 "vmd_rescan", 00:06:57.137 "vmd_remove_device", 00:06:57.137 "vmd_enable", 00:06:57.137 "sock_get_default_impl", 00:06:57.137 "sock_set_default_impl", 00:06:57.137 "sock_impl_set_options", 00:06:57.137 "sock_impl_get_options", 00:06:57.137 "iobuf_get_stats", 00:06:57.137 "iobuf_set_options", 00:06:57.137 "framework_get_pci_devices", 00:06:57.137 "framework_get_config", 00:06:57.137 "framework_get_subsystems", 00:06:57.137 "trace_get_info", 00:06:57.137 "trace_get_tpoint_group_mask", 00:06:57.137 "trace_disable_tpoint_group", 00:06:57.137 "trace_enable_tpoint_group", 
00:06:57.137 "trace_clear_tpoint_mask", 00:06:57.137 "trace_set_tpoint_mask", 00:06:57.137 "keyring_get_keys", 00:06:57.137 "spdk_get_version", 00:06:57.137 "rpc_get_methods" 00:06:57.137 ] 00:06:57.137 13:05:53 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:57.137 13:05:53 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:57.137 13:05:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.137 13:05:53 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:57.137 13:05:53 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 74934 00:06:57.137 13:05:53 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 74934 ']' 00:06:57.137 13:05:53 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 74934 00:06:57.137 13:05:53 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:06:57.137 13:05:53 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:57.137 13:05:53 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74934 00:06:57.137 killing process with pid 74934 00:06:57.137 13:05:53 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:57.137 13:05:53 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:57.137 13:05:53 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74934' 00:06:57.137 13:05:53 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 74934 00:06:57.137 13:05:53 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 74934 00:06:57.702 00:06:57.702 real 0m2.013s 00:06:57.702 user 0m3.526s 00:06:57.702 sys 0m0.639s 00:06:57.702 13:05:54 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.702 13:05:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.702 ************************************ 00:06:57.702 END TEST spdkcli_tcp 00:06:57.702 ************************************ 00:06:57.702 13:05:54 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:57.702 13:05:54 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:57.702 13:05:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.702 13:05:54 -- common/autotest_common.sh@10 -- # set +x 00:06:57.702 ************************************ 00:06:57.702 START TEST dpdk_mem_utility 00:06:57.702 ************************************ 00:06:57.702 13:05:54 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:57.962 * Looking for test storage... 
00:06:57.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:57.962 13:05:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:57.962 13:05:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=75025 00:06:57.962 13:05:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:57.962 13:05:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 75025 00:06:57.962 13:05:54 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 75025 ']' 00:06:57.962 13:05:54 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.962 13:05:54 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:57.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.962 13:05:54 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.962 13:05:54 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:57.962 13:05:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:57.962 [2024-07-15 13:05:54.616123] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:57.962 [2024-07-15 13:05:54.616322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75025 ] 00:06:58.220 [2024-07-15 13:05:54.762159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.220 [2024-07-15 13:05:54.830256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.786 13:05:55 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:58.786 13:05:55 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:58.786 13:05:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:58.786 13:05:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:58.786 13:05:55 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.786 13:05:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:58.786 { 00:06:58.786 "filename": "/tmp/spdk_mem_dump.txt" 00:06:58.786 } 00:06:58.786 13:05:55 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.786 13:05:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:58.786 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:58.786 1 heaps totaling size 814.000000 MiB 00:06:58.786 size: 814.000000 MiB heap id: 0 00:06:58.786 end heaps---------- 00:06:58.786 8 mempools totaling size 598.116089 MiB 00:06:58.786 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:58.786 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:58.786 size: 84.521057 MiB name: bdev_io_75025 00:06:58.786 size: 51.011292 MiB name: evtpool_75025 00:06:58.786 size: 50.003479 MiB name: msgpool_75025 00:06:58.786 size: 21.763794 MiB name: PDU_Pool 00:06:58.786 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:58.786 size: 0.026123 
MiB name: Session_Pool 00:06:58.786 end mempools------- 00:06:58.786 6 memzones totaling size 4.142822 MiB 00:06:58.786 size: 1.000366 MiB name: RG_ring_0_75025 00:06:58.786 size: 1.000366 MiB name: RG_ring_1_75025 00:06:58.786 size: 1.000366 MiB name: RG_ring_4_75025 00:06:58.786 size: 1.000366 MiB name: RG_ring_5_75025 00:06:58.786 size: 0.125366 MiB name: RG_ring_2_75025 00:06:58.786 size: 0.015991 MiB name: RG_ring_3_75025 00:06:58.786 end memzones------- 00:06:58.786 13:05:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:59.046 heap id: 0 total size: 814.000000 MiB number of busy elements: 306 number of free elements: 15 00:06:59.046 list of free elements. size: 12.470825 MiB 00:06:59.046 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:59.046 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:59.046 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:59.046 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:59.046 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:59.046 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:59.046 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:59.046 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:59.046 element at address: 0x200000200000 with size: 0.833191 MiB 00:06:59.046 element at address: 0x20001aa00000 with size: 0.567505 MiB 00:06:59.046 element at address: 0x20000b200000 with size: 0.489624 MiB 00:06:59.046 element at address: 0x200000800000 with size: 0.486145 MiB 00:06:59.046 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:59.046 element at address: 0x200027e00000 with size: 0.395752 MiB 00:06:59.046 element at address: 0x200003a00000 with size: 0.347839 MiB 00:06:59.046 list of standard malloc elements. 
size: 199.266602 MiB 00:06:59.046 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:59.046 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:59.046 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:59.046 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:59.046 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:59.046 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:59.046 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:59.046 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:59.046 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:59.046 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:59.046 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:59.047 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:59.047 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:59.047 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:06:59.047 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:59.047 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:59.047 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:59.047 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:59.047 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:59.047 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:59.047 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:59.047 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:59.047 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:59.047 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20000087c740 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20000087c800 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20000087c980 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a59180 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a59240 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a59300 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a59480 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a59540 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a59600 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a59780 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a59840 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a59900 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:59.047 element at 
address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:59.047 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:59.047 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:59.047 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:59.047 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa91480 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa91540 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa91600 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa91c00 
with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa940c0 with size: 0.000183 MiB 
00:06:59.047 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:59.047 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:59.048 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:59.048 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:59.048 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:59.048 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:59.048 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:59.048 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:59.048 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:59.048 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:59.048 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:59.048 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:59.048 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:59.048 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:59.048 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:59.048 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:59.048 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e65500 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:59.048 element at 
address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6f780 
with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:59.048 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:59.048 list of memzone associated elements. size: 602.262573 MiB 00:06:59.048 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:59.048 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:59.048 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:59.048 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:59.048 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:59.048 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_75025_0 00:06:59.048 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:59.048 associated memzone info: size: 48.002930 MiB name: MP_evtpool_75025_0 00:06:59.048 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:59.048 associated memzone info: size: 48.002930 MiB name: MP_msgpool_75025_0 00:06:59.048 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:59.048 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:59.048 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:59.048 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:59.048 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:59.048 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_75025 00:06:59.048 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:59.048 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_75025 00:06:59.048 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:59.048 associated memzone info: size: 1.007996 MiB name: MP_evtpool_75025 00:06:59.048 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:59.048 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:59.048 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:59.048 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:59.048 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:59.048 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:59.048 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:59.048 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:59.048 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:59.048 associated memzone info: size: 1.000366 MiB name: RG_ring_0_75025 00:06:59.048 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:59.048 associated memzone info: size: 1.000366 MiB name: RG_ring_1_75025 00:06:59.048 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:59.048 associated memzone info: size: 1.000366 MiB name: RG_ring_4_75025 00:06:59.048 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:59.048 associated 
memzone info: size: 1.000366 MiB name: RG_ring_5_75025 00:06:59.048 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:59.048 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_75025 00:06:59.048 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:59.048 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:59.048 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:59.048 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:59.048 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:59.048 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:59.048 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:59.048 associated memzone info: size: 0.125366 MiB name: RG_ring_2_75025 00:06:59.048 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:59.048 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:59.048 element at address: 0x200027e65680 with size: 0.023743 MiB 00:06:59.048 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:59.048 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:59.048 associated memzone info: size: 0.015991 MiB name: RG_ring_3_75025 00:06:59.048 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:06:59.048 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:59.048 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:59.048 associated memzone info: size: 0.000183 MiB name: MP_msgpool_75025 00:06:59.048 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:59.048 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_75025 00:06:59.048 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:06:59.048 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:59.048 13:05:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:59.049 13:05:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 75025 00:06:59.049 13:05:55 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 75025 ']' 00:06:59.049 13:05:55 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 75025 00:06:59.049 13:05:55 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:06:59.049 13:05:55 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:59.049 13:05:55 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75025 00:06:59.049 13:05:55 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:59.049 killing process with pid 75025 00:06:59.049 13:05:55 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:59.049 13:05:55 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75025' 00:06:59.049 13:05:55 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 75025 00:06:59.049 13:05:55 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 75025 00:06:59.616 ************************************ 00:06:59.616 END TEST dpdk_mem_utility 00:06:59.616 ************************************ 00:06:59.616 00:06:59.616 real 0m1.712s 00:06:59.616 user 0m1.684s 00:06:59.616 sys 0m0.529s 00:06:59.616 13:05:56 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.616 13:05:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 
00:06:59.616 13:05:56 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:59.616 13:05:56 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:59.616 13:05:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.616 13:05:56 -- common/autotest_common.sh@10 -- # set +x 00:06:59.616 ************************************ 00:06:59.616 START TEST event 00:06:59.616 ************************************ 00:06:59.616 13:05:56 event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:59.616 * Looking for test storage... 00:06:59.616 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:59.616 13:05:56 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:59.616 13:05:56 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:59.616 13:05:56 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:59.616 13:05:56 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:59.616 13:05:56 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.616 13:05:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:59.616 ************************************ 00:06:59.616 START TEST event_perf 00:06:59.616 ************************************ 00:06:59.616 13:05:56 event.event_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:59.616 Running I/O for 1 seconds...[2024-07-15 13:05:56.287757] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:59.616 [2024-07-15 13:05:56.288025] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75099 ] 00:06:59.874 [2024-07-15 13:05:56.441423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:59.874 [2024-07-15 13:05:56.538492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.874 [2024-07-15 13:05:56.538618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.874 [2024-07-15 13:05:56.538534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.874 Running I/O for 1 seconds...[2024-07-15 13:05:56.538689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.249 00:07:01.249 lcore 0: 108035 00:07:01.249 lcore 1: 108031 00:07:01.249 lcore 2: 108032 00:07:01.249 lcore 3: 108035 00:07:01.249 done. 
00:07:01.249 00:07:01.249 real 0m1.394s 00:07:01.249 user 0m4.122s 00:07:01.249 sys 0m0.130s 00:07:01.249 13:05:57 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.249 13:05:57 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:01.249 ************************************ 00:07:01.249 END TEST event_perf 00:07:01.249 ************************************ 00:07:01.249 13:05:57 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:01.249 13:05:57 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:01.249 13:05:57 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.249 13:05:57 event -- common/autotest_common.sh@10 -- # set +x 00:07:01.249 ************************************ 00:07:01.249 START TEST event_reactor 00:07:01.249 ************************************ 00:07:01.249 13:05:57 event.event_reactor -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:01.249 [2024-07-15 13:05:57.738892] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:01.249 [2024-07-15 13:05:57.739107] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75138 ] 00:07:01.249 [2024-07-15 13:05:57.891997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.508 [2024-07-15 13:05:57.988607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.440 test_start 00:07:02.440 oneshot 00:07:02.440 tick 100 00:07:02.440 tick 100 00:07:02.440 tick 250 00:07:02.440 tick 100 00:07:02.440 tick 100 00:07:02.440 tick 100 00:07:02.440 tick 250 00:07:02.440 tick 500 00:07:02.440 tick 100 00:07:02.440 tick 100 00:07:02.440 tick 250 00:07:02.440 tick 100 00:07:02.440 tick 100 00:07:02.440 test_end 00:07:02.440 00:07:02.440 real 0m1.386s 00:07:02.440 user 0m1.184s 00:07:02.440 sys 0m0.094s 00:07:02.440 13:05:59 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.440 ************************************ 00:07:02.440 END TEST event_reactor 00:07:02.440 13:05:59 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:02.440 ************************************ 00:07:02.440 13:05:59 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:02.440 13:05:59 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:02.440 13:05:59 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.440 13:05:59 event -- common/autotest_common.sh@10 -- # set +x 00:07:02.440 ************************************ 00:07:02.440 START TEST event_reactor_perf 00:07:02.440 ************************************ 00:07:02.440 13:05:59 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:02.440 [2024-07-15 13:05:59.175643] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:02.440 [2024-07-15 13:05:59.175856] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75178 ] 00:07:02.697 [2024-07-15 13:05:59.319954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.697 [2024-07-15 13:05:59.421599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.070 test_start 00:07:04.070 test_end 00:07:04.070 Performance: 271103 events per second 00:07:04.070 00:07:04.070 real 0m1.375s 00:07:04.070 user 0m1.179s 00:07:04.070 sys 0m0.088s 00:07:04.070 13:06:00 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:04.070 ************************************ 00:07:04.070 END TEST event_reactor_perf 00:07:04.070 ************************************ 00:07:04.070 13:06:00 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:04.070 13:06:00 event -- event/event.sh@49 -- # uname -s 00:07:04.070 13:06:00 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:04.070 13:06:00 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:04.070 13:06:00 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:04.070 13:06:00 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:04.070 13:06:00 event -- common/autotest_common.sh@10 -- # set +x 00:07:04.070 ************************************ 00:07:04.070 START TEST event_scheduler 00:07:04.070 ************************************ 00:07:04.070 13:06:00 event.event_scheduler -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:04.070 * Looking for test storage... 00:07:04.070 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:04.070 13:06:00 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:04.070 13:06:00 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=75235 00:07:04.070 13:06:00 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:04.070 13:06:00 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:04.070 13:06:00 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 75235 00:07:04.070 13:06:00 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 75235 ']' 00:07:04.070 13:06:00 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.070 13:06:00 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:04.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.070 13:06:00 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.070 13:06:00 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:04.070 13:06:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:04.070 [2024-07-15 13:06:00.763555] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:04.070 [2024-07-15 13:06:00.763751] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75235 ] 00:07:04.328 [2024-07-15 13:06:00.917704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:04.328 [2024-07-15 13:06:01.065821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.328 [2024-07-15 13:06:01.065928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.328 [2024-07-15 13:06:01.066068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.328 [2024-07-15 13:06:01.066124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.260 13:06:01 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:05.260 13:06:01 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:07:05.260 13:06:01 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:05.260 13:06:01 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.260 13:06:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:05.260 POWER: Env isn't set yet! 00:07:05.260 POWER: Attempting to initialise ACPI cpufreq power management... 00:07:05.260 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:05.260 POWER: Cannot set governor of lcore 0 to userspace 00:07:05.260 POWER: Attempting to initialise PSTAT power management... 00:07:05.260 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:05.260 POWER: Cannot set governor of lcore 0 to performance 00:07:05.260 POWER: Attempting to initialise CPPC power management... 00:07:05.260 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:05.260 POWER: Cannot set governor of lcore 0 to userspace 00:07:05.260 POWER: Attempting to initialise VM power management... 00:07:05.260 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:05.260 POWER: Unable to set Power Management Environment for lcore 0 00:07:05.260 [2024-07-15 13:06:01.740316] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:07:05.260 [2024-07-15 13:06:01.740367] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:07:05.260 [2024-07-15 13:06:01.740385] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:07:05.260 [2024-07-15 13:06:01.740423] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:05.260 [2024-07-15 13:06:01.740456] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:05.260 [2024-07-15 13:06:01.740482] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:05.260 13:06:01 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.260 13:06:01 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:05.260 13:06:01 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.260 13:06:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:05.260 [2024-07-15 13:06:01.890148] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:07:05.260 13:06:01 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.260 13:06:01 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:05.260 13:06:01 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:05.260 13:06:01 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.260 13:06:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:05.260 ************************************ 00:07:05.260 START TEST scheduler_create_thread 00:07:05.260 ************************************ 00:07:05.260 13:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:07:05.260 13:06:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:05.260 13:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.260 13:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.261 2 00:07:05.261 13:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.261 13:06:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:05.261 13:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.261 13:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.261 3 00:07:05.261 13:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.261 13:06:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:05.261 13:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.261 13:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.261 4 00:07:05.261 13:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.261 13:06:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:05.261 13:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.261 13:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.261 5 00:07:05.261 13:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.261 13:06:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:05.261 13:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.261 13:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.261 6 00:07:05.261 13:06:01 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.261 13:06:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:05.261 13:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.261 13:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.261 7 00:07:05.261 13:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.261 13:06:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:05.261 13:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.261 13:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.261 8 00:07:05.261 13:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.261 13:06:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:05.261 13:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.261 13:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.519 9 00:07:05.519 13:06:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.519 13:06:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:05.519 13:06:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.519 13:06:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.519 10 00:07:05.519 13:06:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.519 13:06:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:05.519 13:06:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.519 13:06:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.893 13:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.893 13:06:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:06.893 13:06:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:06.893 13:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.893 13:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.459 13:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.459 13:06:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:07.459 13:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.459 13:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:08.420 13:06:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.420 13:06:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:08.420 13:06:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:08.420 13:06:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.420 13:06:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.356 13:06:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.356 00:07:09.356 real 0m3.890s 00:07:09.356 user 0m0.014s 00:07:09.356 sys 0m0.012s 00:07:09.356 13:06:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.356 ************************************ 00:07:09.356 END TEST scheduler_create_thread 00:07:09.356 ************************************ 00:07:09.356 13:06:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.356 13:06:05 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:09.356 13:06:05 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 75235 00:07:09.356 13:06:05 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 75235 ']' 00:07:09.356 13:06:05 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 75235 00:07:09.356 13:06:05 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:07:09.356 13:06:05 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:09.356 13:06:05 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75235 00:07:09.356 13:06:05 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:07:09.356 13:06:05 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:07:09.356 killing process with pid 75235 00:07:09.356 13:06:05 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75235' 00:07:09.356 13:06:05 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 75235 00:07:09.356 13:06:05 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 75235 00:07:09.614 [2024-07-15 13:06:06.173674] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:07:10.182 00:07:10.182 real 0m6.054s 00:07:10.182 user 0m12.463s 00:07:10.182 sys 0m0.557s 00:07:10.182 13:06:06 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.182 ************************************ 00:07:10.182 END TEST event_scheduler 00:07:10.182 ************************************ 00:07:10.182 13:06:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:10.182 13:06:06 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:10.182 13:06:06 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:10.182 13:06:06 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:10.182 13:06:06 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.182 13:06:06 event -- common/autotest_common.sh@10 -- # set +x 00:07:10.182 ************************************ 00:07:10.182 START TEST app_repeat 00:07:10.182 ************************************ 00:07:10.182 13:06:06 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:07:10.182 13:06:06 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.182 13:06:06 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.182 13:06:06 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:10.182 13:06:06 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:10.182 13:06:06 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:10.182 13:06:06 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:10.182 13:06:06 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:10.182 13:06:06 event.app_repeat -- event/event.sh@19 -- # repeat_pid=75352 00:07:10.182 Process app_repeat pid: 75352 00:07:10.183 13:06:06 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:10.183 13:06:06 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 75352' 00:07:10.183 13:06:06 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:10.183 spdk_app_start Round 0 00:07:10.183 13:06:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:10.183 13:06:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:10.183 13:06:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75352 /var/tmp/spdk-nbd.sock 00:07:10.183 13:06:06 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 75352 ']' 00:07:10.183 13:06:06 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:10.183 13:06:06 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:10.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:10.183 13:06:06 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:10.183 13:06:06 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:10.183 13:06:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:10.183 [2024-07-15 13:06:06.737475] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:10.183 [2024-07-15 13:06:06.737661] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75352 ] 00:07:10.183 [2024-07-15 13:06:06.881583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:10.441 [2024-07-15 13:06:06.988407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.441 [2024-07-15 13:06:06.988505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.008 13:06:07 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:11.008 13:06:07 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:11.008 13:06:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:11.277 Malloc0 00:07:11.277 13:06:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:11.535 Malloc1 00:07:11.535 13:06:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:11.535 13:06:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.535 13:06:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.535 13:06:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:11.535 13:06:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.535 13:06:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:11.535 13:06:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:11.535 13:06:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.535 13:06:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.535 13:06:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:11.535 13:06:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.535 13:06:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:11.535 13:06:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:11.535 13:06:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:11.535 13:06:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.535 13:06:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:11.794 /dev/nbd0 00:07:11.794 13:06:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:12.052 13:06:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:12.052 13:06:08 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:12.052 13:06:08 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:12.052 13:06:08 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:12.052 13:06:08 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:12.052 13:06:08 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:07:12.052 13:06:08 event.app_repeat -- 
common/autotest_common.sh@869 -- # break 00:07:12.052 13:06:08 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:12.052 13:06:08 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:12.052 13:06:08 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:12.052 1+0 records in 00:07:12.052 1+0 records out 00:07:12.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000581314 s, 7.0 MB/s 00:07:12.052 13:06:08 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:12.052 13:06:08 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:12.052 13:06:08 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:12.052 13:06:08 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:12.052 13:06:08 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:12.052 13:06:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.052 13:06:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.052 13:06:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:12.309 /dev/nbd1 00:07:12.309 13:06:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:12.309 13:06:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:12.309 13:06:08 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:12.309 13:06:08 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:12.309 13:06:08 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:12.309 13:06:08 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:12.309 13:06:08 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:12.309 13:06:08 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:12.309 13:06:08 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:12.309 13:06:08 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:12.309 13:06:08 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:12.309 1+0 records in 00:07:12.309 1+0 records out 00:07:12.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407944 s, 10.0 MB/s 00:07:12.309 13:06:08 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:12.309 13:06:08 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:12.309 13:06:08 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:12.309 13:06:08 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:12.309 13:06:08 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:12.309 13:06:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.309 13:06:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.309 13:06:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:12.309 13:06:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.309 
13:06:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:12.568 { 00:07:12.568 "nbd_device": "/dev/nbd0", 00:07:12.568 "bdev_name": "Malloc0" 00:07:12.568 }, 00:07:12.568 { 00:07:12.568 "nbd_device": "/dev/nbd1", 00:07:12.568 "bdev_name": "Malloc1" 00:07:12.568 } 00:07:12.568 ]' 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:12.568 { 00:07:12.568 "nbd_device": "/dev/nbd0", 00:07:12.568 "bdev_name": "Malloc0" 00:07:12.568 }, 00:07:12.568 { 00:07:12.568 "nbd_device": "/dev/nbd1", 00:07:12.568 "bdev_name": "Malloc1" 00:07:12.568 } 00:07:12.568 ]' 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:12.568 /dev/nbd1' 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:12.568 /dev/nbd1' 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:12.568 256+0 records in 00:07:12.568 256+0 records out 00:07:12.568 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00912714 s, 115 MB/s 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:12.568 256+0 records in 00:07:12.568 256+0 records out 00:07:12.568 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0335252 s, 31.3 MB/s 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:12.568 256+0 records in 00:07:12.568 256+0 records out 00:07:12.568 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0373257 s, 28.1 MB/s 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:12.568 13:06:09 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.568 13:06:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:12.827 13:06:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:12.827 13:06:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:12.827 13:06:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:12.827 13:06:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.827 13:06:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.827 13:06:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:12.827 13:06:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:12.827 13:06:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.827 13:06:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.827 13:06:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:13.085 13:06:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:13.085 13:06:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:13.085 13:06:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:13.085 13:06:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.085 13:06:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.085 13:06:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:13.085 13:06:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:13.085 13:06:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.085 13:06:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.085 13:06:09 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.085 13:06:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.343 13:06:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:13.343 13:06:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:13.343 13:06:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.601 13:06:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:13.601 13:06:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:13.601 13:06:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.601 13:06:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:13.602 13:06:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:13.602 13:06:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:13.602 13:06:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:13.602 13:06:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:13.602 13:06:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:13.602 13:06:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:13.860 13:06:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:14.118 [2024-07-15 13:06:10.630478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:14.118 [2024-07-15 13:06:10.731128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.118 [2024-07-15 13:06:10.731138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.118 [2024-07-15 13:06:10.792310] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:14.119 [2024-07-15 13:06:10.792384] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:17.399 spdk_app_start Round 1 00:07:17.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:17.399 13:06:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:17.399 13:06:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:17.399 13:06:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75352 /var/tmp/spdk-nbd.sock 00:07:17.399 13:06:13 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 75352 ']' 00:07:17.399 13:06:13 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:17.399 13:06:13 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:17.399 13:06:13 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:17.399 13:06:13 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:17.399 13:06:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:17.399 13:06:13 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:17.399 13:06:13 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:17.399 13:06:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:17.399 Malloc0 00:07:17.399 13:06:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:17.657 Malloc1 00:07:17.657 13:06:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:17.657 13:06:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.657 13:06:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.657 13:06:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:17.657 13:06:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.657 13:06:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:17.657 13:06:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:17.657 13:06:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.657 13:06:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.657 13:06:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:17.657 13:06:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.657 13:06:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:17.657 13:06:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:17.657 13:06:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:17.657 13:06:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.657 13:06:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:17.916 /dev/nbd0 00:07:17.916 13:06:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:17.916 13:06:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:17.916 13:06:14 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:17.916 13:06:14 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:17.916 13:06:14 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:17.916 13:06:14 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:17.916 13:06:14 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:07:17.916 13:06:14 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:17.916 13:06:14 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:17.916 13:06:14 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:17.916 13:06:14 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:17.916 1+0 records in 00:07:17.916 1+0 records out 
00:07:17.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299074 s, 13.7 MB/s 00:07:17.916 13:06:14 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.916 13:06:14 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:17.916 13:06:14 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.916 13:06:14 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:17.916 13:06:14 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:17.916 13:06:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.916 13:06:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.916 13:06:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:18.176 /dev/nbd1 00:07:18.176 13:06:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:18.176 13:06:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:18.176 13:06:14 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:18.176 13:06:14 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:18.176 13:06:14 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:18.176 13:06:14 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:18.176 13:06:14 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:18.176 13:06:14 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:18.176 13:06:14 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:18.176 13:06:14 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:18.176 13:06:14 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:18.176 1+0 records in 00:07:18.176 1+0 records out 00:07:18.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000737054 s, 5.6 MB/s 00:07:18.176 13:06:14 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:18.176 13:06:14 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:18.176 13:06:14 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:18.176 13:06:14 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:18.176 13:06:14 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:18.176 13:06:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.176 13:06:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:18.176 13:06:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:18.176 13:06:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.176 13:06:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.434 13:06:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:18.434 { 00:07:18.434 "nbd_device": "/dev/nbd0", 00:07:18.434 "bdev_name": "Malloc0" 00:07:18.434 }, 00:07:18.434 { 00:07:18.434 "nbd_device": "/dev/nbd1", 00:07:18.434 "bdev_name": "Malloc1" 00:07:18.434 } 
00:07:18.434 ]' 00:07:18.434 13:06:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:18.434 { 00:07:18.434 "nbd_device": "/dev/nbd0", 00:07:18.434 "bdev_name": "Malloc0" 00:07:18.434 }, 00:07:18.434 { 00:07:18.434 "nbd_device": "/dev/nbd1", 00:07:18.434 "bdev_name": "Malloc1" 00:07:18.434 } 00:07:18.434 ]' 00:07:18.434 13:06:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:18.434 13:06:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:18.434 /dev/nbd1' 00:07:18.434 13:06:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:18.434 /dev/nbd1' 00:07:18.434 13:06:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.434 13:06:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:18.434 13:06:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:18.434 13:06:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:18.434 13:06:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:18.434 13:06:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:18.434 13:06:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.434 13:06:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.434 13:06:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:18.434 13:06:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.434 13:06:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:18.434 13:06:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:18.434 256+0 records in 00:07:18.434 256+0 records out 00:07:18.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00784788 s, 134 MB/s 00:07:18.434 13:06:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.434 13:06:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:18.693 256+0 records in 00:07:18.693 256+0 records out 00:07:18.693 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0323203 s, 32.4 MB/s 00:07:18.693 13:06:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.693 13:06:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:18.693 256+0 records in 00:07:18.693 256+0 records out 00:07:18.693 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0371013 s, 28.3 MB/s 00:07:18.693 13:06:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:18.693 13:06:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.693 13:06:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.693 13:06:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:18.693 13:06:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.693 13:06:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:18.693 13:06:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:18.693 13:06:15 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.693 13:06:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:18.693 13:06:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.693 13:06:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:18.693 13:06:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.693 13:06:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:18.693 13:06:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.693 13:06:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.693 13:06:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:18.693 13:06:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:18.693 13:06:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.693 13:06:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:18.952 13:06:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:18.952 13:06:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:18.952 13:06:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:18.952 13:06:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.952 13:06:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.952 13:06:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:18.952 13:06:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:18.952 13:06:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.952 13:06:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.952 13:06:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:19.211 13:06:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:19.211 13:06:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:19.211 13:06:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:19.211 13:06:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.211 13:06:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.211 13:06:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:19.211 13:06:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:19.211 13:06:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.211 13:06:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:19.211 13:06:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.211 13:06:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:19.469 13:06:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:19.469 13:06:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:19.469 13:06:16 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:19.469 13:06:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:19.469 13:06:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:19.469 13:06:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:19.469 13:06:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:19.469 13:06:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:19.469 13:06:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:19.469 13:06:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:19.469 13:06:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:19.469 13:06:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:19.469 13:06:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:19.727 13:06:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:19.985 [2024-07-15 13:06:16.616672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:19.985 [2024-07-15 13:06:16.718261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.985 [2024-07-15 13:06:16.718272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.243 [2024-07-15 13:06:16.776178] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:20.243 [2024-07-15 13:06:16.776281] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:22.837 spdk_app_start Round 2 00:07:22.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:22.837 13:06:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:22.837 13:06:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:22.837 13:06:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75352 /var/tmp/spdk-nbd.sock 00:07:22.837 13:06:19 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 75352 ']' 00:07:22.837 13:06:19 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:22.837 13:06:19 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:22.837 13:06:19 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
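A minimal sketch of the per-round nbd data-verify cycle traced above, assuming the same RPC socket and with $SPDK_REPO standing in for /home/vagrant/spdk_repo/spdk; this is a condensed illustration of the flow, not the exact nbd_common.sh code:

  rpc="$SPDK_REPO/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  tmp=$SPDK_REPO/test/event/nbdrandtest

  # export each malloc bdev as an nbd block device
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  $rpc nbd_start_disk Malloc1 /dev/nbd1

  # push 1 MiB of random data through each device, then read it back and compare
  dd if=/dev/urandom of=$tmp bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct
    cmp -b -n 1M $tmp $nbd   # any mismatch fails the round
  done
  rm $tmp

  # detach both devices; nbd_get_disks should then report an empty list
  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1
  $rpc nbd_get_disks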
00:07:22.837 13:06:19 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:22.837 13:06:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:23.095 13:06:19 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:23.095 13:06:19 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:23.095 13:06:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:23.353 Malloc0 00:07:23.353 13:06:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:23.611 Malloc1 00:07:23.611 13:06:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:23.611 13:06:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.611 13:06:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:23.611 13:06:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:23.611 13:06:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:23.611 13:06:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:23.611 13:06:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:23.611 13:06:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.611 13:06:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:23.611 13:06:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:23.611 13:06:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:23.611 13:06:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:23.611 13:06:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:23.611 13:06:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:23.611 13:06:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:23.611 13:06:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:23.870 /dev/nbd0 00:07:23.870 13:06:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:23.870 13:06:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:23.870 13:06:20 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:23.870 13:06:20 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:23.870 13:06:20 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:23.870 13:06:20 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:23.870 13:06:20 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:07:23.870 13:06:20 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:23.870 13:06:20 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:23.870 13:06:20 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:23.870 13:06:20 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:23.870 1+0 records in 00:07:23.870 1+0 records out 
00:07:23.870 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351902 s, 11.6 MB/s 00:07:23.870 13:06:20 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:23.870 13:06:20 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:23.870 13:06:20 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:23.870 13:06:20 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:23.870 13:06:20 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:23.870 13:06:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.870 13:06:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:23.870 13:06:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:24.128 /dev/nbd1 00:07:24.128 13:06:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:24.128 13:06:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:24.128 13:06:20 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:24.128 13:06:20 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:24.128 13:06:20 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:24.128 13:06:20 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:24.128 13:06:20 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:24.128 13:06:20 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:24.128 13:06:20 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:24.128 13:06:20 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:24.128 13:06:20 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:24.128 1+0 records in 00:07:24.128 1+0 records out 00:07:24.128 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470593 s, 8.7 MB/s 00:07:24.128 13:06:20 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:24.128 13:06:20 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:24.128 13:06:20 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:24.128 13:06:20 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:24.128 13:06:20 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:24.128 13:06:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.128 13:06:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:24.128 13:06:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:24.128 13:06:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.128 13:06:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:24.387 13:06:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:24.387 { 00:07:24.387 "nbd_device": "/dev/nbd0", 00:07:24.387 "bdev_name": "Malloc0" 00:07:24.387 }, 00:07:24.387 { 00:07:24.387 "nbd_device": "/dev/nbd1", 00:07:24.387 "bdev_name": "Malloc1" 00:07:24.387 } 
00:07:24.387 ]' 00:07:24.387 13:06:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:24.387 { 00:07:24.387 "nbd_device": "/dev/nbd0", 00:07:24.387 "bdev_name": "Malloc0" 00:07:24.387 }, 00:07:24.387 { 00:07:24.387 "nbd_device": "/dev/nbd1", 00:07:24.387 "bdev_name": "Malloc1" 00:07:24.387 } 00:07:24.387 ]' 00:07:24.387 13:06:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:24.387 13:06:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:24.387 /dev/nbd1' 00:07:24.387 13:06:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:24.387 /dev/nbd1' 00:07:24.387 13:06:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:24.648 256+0 records in 00:07:24.648 256+0 records out 00:07:24.648 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00753703 s, 139 MB/s 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:24.648 256+0 records in 00:07:24.648 256+0 records out 00:07:24.648 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0310886 s, 33.7 MB/s 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:24.648 256+0 records in 00:07:24.648 256+0 records out 00:07:24.648 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0358714 s, 29.2 MB/s 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:24.648 13:06:21 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.648 13:06:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:24.907 13:06:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:24.907 13:06:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:24.907 13:06:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:24.907 13:06:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.907 13:06:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.907 13:06:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:24.907 13:06:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:24.907 13:06:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.907 13:06:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.907 13:06:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:25.164 13:06:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:25.164 13:06:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:25.164 13:06:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:25.164 13:06:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.164 13:06:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.164 13:06:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:25.164 13:06:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:25.164 13:06:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.164 13:06:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:25.164 13:06:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.164 13:06:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:25.422 13:06:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:25.422 13:06:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:25.422 13:06:22 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:25.422 13:06:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:25.422 13:06:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:25.422 13:06:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:25.422 13:06:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:25.422 13:06:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:25.422 13:06:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:25.422 13:06:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:25.422 13:06:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:25.422 13:06:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:25.422 13:06:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:25.989 13:06:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:25.989 [2024-07-15 13:06:22.694555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:26.247 [2024-07-15 13:06:22.787573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.247 [2024-07-15 13:06:22.787576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.247 [2024-07-15 13:06:22.847549] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:26.247 [2024-07-15 13:06:22.847624] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:28.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:28.779 13:06:25 event.app_repeat -- event/event.sh@38 -- # waitforlisten 75352 /var/tmp/spdk-nbd.sock 00:07:28.779 13:06:25 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 75352 ']' 00:07:28.779 13:06:25 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:28.779 13:06:25 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:28.779 13:06:25 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
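The count checks in the trace derive the number of attached devices from the nbd_get_disks JSON. A condensed sketch of that pipeline, reusing the jq and grep invocations visible above ($rpc as in the earlier sketch):

  disks_json=$($rpc nbd_get_disks)
  # list each nbd_device field, then count how many are /dev/nbd*;
  # grep -c exits non-zero on an empty list, hence the "true" fallback seen in the trace
  echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true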
00:07:28.779 13:06:25 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:28.779 13:06:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:29.346 13:06:25 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:29.346 13:06:25 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:29.346 13:06:25 event.app_repeat -- event/event.sh@39 -- # killprocess 75352 00:07:29.346 13:06:25 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 75352 ']' 00:07:29.346 13:06:25 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 75352 00:07:29.346 13:06:25 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:07:29.346 13:06:25 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:29.346 13:06:25 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75352 00:07:29.346 killing process with pid 75352 00:07:29.346 13:06:25 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:29.346 13:06:25 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:29.346 13:06:25 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75352' 00:07:29.346 13:06:25 event.app_repeat -- common/autotest_common.sh@965 -- # kill 75352 00:07:29.346 13:06:25 event.app_repeat -- common/autotest_common.sh@970 -- # wait 75352 00:07:29.346 spdk_app_start is called in Round 0. 00:07:29.346 Shutdown signal received, stop current app iteration 00:07:29.346 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:07:29.346 spdk_app_start is called in Round 1. 00:07:29.346 Shutdown signal received, stop current app iteration 00:07:29.346 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:07:29.346 spdk_app_start is called in Round 2. 00:07:29.346 Shutdown signal received, stop current app iteration 00:07:29.346 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:07:29.346 spdk_app_start is called in Round 3. 00:07:29.346 Shutdown signal received, stop current app iteration 00:07:29.346 13:06:26 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:29.346 13:06:26 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:29.346 00:07:29.346 real 0m19.342s 00:07:29.346 user 0m43.389s 00:07:29.346 sys 0m2.900s 00:07:29.346 13:06:26 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.346 ************************************ 00:07:29.346 END TEST app_repeat 00:07:29.346 ************************************ 00:07:29.346 13:06:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:29.346 13:06:26 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:29.346 13:06:26 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:29.346 13:06:26 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:29.346 13:06:26 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.346 13:06:26 event -- common/autotest_common.sh@10 -- # set +x 00:07:29.346 ************************************ 00:07:29.346 START TEST cpu_locks 00:07:29.346 ************************************ 00:07:29.346 13:06:26 event.cpu_locks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:29.605 * Looking for test storage... 
00:07:29.605 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:29.605 13:06:26 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:29.605 13:06:26 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:29.605 13:06:26 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:29.605 13:06:26 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:29.605 13:06:26 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:29.605 13:06:26 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.605 13:06:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:29.605 ************************************ 00:07:29.605 START TEST default_locks 00:07:29.605 ************************************ 00:07:29.605 13:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:07:29.605 13:06:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=75793 00:07:29.605 13:06:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 75793 00:07:29.605 13:06:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:29.605 13:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 75793 ']' 00:07:29.605 13:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.605 13:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:29.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.605 13:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.605 13:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:29.605 13:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:29.605 [2024-07-15 13:06:26.308096] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:29.605 [2024-07-15 13:06:26.308310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75793 ] 00:07:29.863 [2024-07-15 13:06:26.459959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.863 [2024-07-15 13:06:26.560840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.843 13:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:30.843 13:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:07:30.843 13:06:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 75793 00:07:30.843 13:06:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 75793 00:07:30.843 13:06:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:31.101 13:06:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 75793 00:07:31.101 13:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 75793 ']' 00:07:31.101 13:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 75793 00:07:31.101 13:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:07:31.101 13:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:31.101 13:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75793 00:07:31.101 13:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:31.101 13:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:31.101 killing process with pid 75793 00:07:31.101 13:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75793' 00:07:31.101 13:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 75793 00:07:31.101 13:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 75793 00:07:31.668 13:06:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 75793 00:07:31.668 13:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:31.668 13:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75793 00:07:31.668 13:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:31.668 13:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.668 13:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:31.668 13:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.668 13:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 75793 00:07:31.668 13:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 75793 ']' 00:07:31.668 13:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.668 13:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:31.668 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:07:31.668 13:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.668 13:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:31.668 13:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:31.668 ERROR: process (pid: 75793) is no longer running 00:07:31.668 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (75793) - No such process 00:07:31.668 13:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:31.668 13:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:07:31.668 13:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:31.668 13:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:31.668 13:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:31.668 13:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:31.668 13:06:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:31.668 13:06:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:31.668 13:06:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:31.668 13:06:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:31.668 00:07:31.668 real 0m2.050s 00:07:31.668 user 0m2.153s 00:07:31.668 sys 0m0.684s 00:07:31.668 13:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:31.668 13:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:31.668 ************************************ 00:07:31.668 END TEST default_locks 00:07:31.668 ************************************ 00:07:31.668 13:06:28 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:31.668 13:06:28 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:31.668 13:06:28 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:31.668 13:06:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:31.668 ************************************ 00:07:31.668 START TEST default_locks_via_rpc 00:07:31.668 ************************************ 00:07:31.668 13:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:07:31.668 13:06:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=75846 00:07:31.668 13:06:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 75846 00:07:31.668 13:06:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:31.668 13:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 75846 ']' 00:07:31.668 13:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.668 13:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:31.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
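Both default_locks variants reduce to checking whether the running spdk_tgt holds the per-core lock file. A condensed sketch of the checks traced above; the pid value is illustrative:

  pid=75793
  # locks_exist: the target started with -m 0x1 should hold a lock named spdk_cpu_lock
  lslocks -p "$pid" | grep -q spdk_cpu_lock
  # killprocess: confirm it is alive, terminate it, and reap it
  kill -0 "$pid" && kill "$pid" && wait "$pid"
  # probing the dead pid again with kill -0 then fails with "No such process",
  # which is the negative case exercised right after the kill above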
00:07:31.668 13:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.668 13:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:31.668 13:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.668 [2024-07-15 13:06:28.395541] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:31.668 [2024-07-15 13:06:28.395729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75846 ] 00:07:31.927 [2024-07-15 13:06:28.548287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.927 [2024-07-15 13:06:28.652400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.862 13:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:32.862 13:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:32.863 13:06:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:32.863 13:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.863 13:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.863 13:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.863 13:06:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:32.863 13:06:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:32.863 13:06:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:32.863 13:06:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:32.863 13:06:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:32.863 13:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.863 13:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.863 13:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.863 13:06:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 75846 00:07:32.863 13:06:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 75846 00:07:32.863 13:06:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:33.429 13:06:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 75846 00:07:33.429 13:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 75846 ']' 00:07:33.429 13:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 75846 00:07:33.429 13:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:07:33.429 13:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:33.429 13:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 
-- # ps --no-headers -o comm= 75846 00:07:33.429 killing process with pid 75846 00:07:33.429 13:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:33.429 13:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:33.429 13:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75846' 00:07:33.429 13:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 75846 00:07:33.429 13:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 75846 00:07:33.687 00:07:33.687 real 0m2.124s 00:07:33.687 user 0m2.243s 00:07:33.687 sys 0m0.711s 00:07:33.687 13:06:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.687 13:06:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.687 ************************************ 00:07:33.687 END TEST default_locks_via_rpc 00:07:33.687 ************************************ 00:07:33.945 13:06:30 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:33.945 13:06:30 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:33.945 13:06:30 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.945 13:06:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:33.945 ************************************ 00:07:33.945 START TEST non_locking_app_on_locked_coremask 00:07:33.945 ************************************ 00:07:33.945 13:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:07:33.945 13:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=75903 00:07:33.945 13:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:33.945 13:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 75903 /var/tmp/spdk.sock 00:07:33.945 13:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75903 ']' 00:07:33.945 13:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.946 13:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:33.946 13:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.946 13:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:33.946 13:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.946 [2024-07-15 13:06:30.572295] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:33.946 [2024-07-15 13:06:30.572507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75903 ] 00:07:34.204 [2024-07-15 13:06:30.722333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.204 [2024-07-15 13:06:30.827807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.138 13:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:35.138 13:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:35.138 13:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=75920 00:07:35.138 13:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:35.138 13:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 75920 /var/tmp/spdk2.sock 00:07:35.139 13:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75920 ']' 00:07:35.139 13:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:35.139 13:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:35.139 13:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:35.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:35.139 13:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:35.139 13:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.139 [2024-07-15 13:06:31.679065] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:35.139 [2024-07-15 13:06:31.679686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75920 ] 00:07:35.139 [2024-07-15 13:06:31.839011] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
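The test above starts a second spdk_tgt on the same core mask, which only comes up because core-mask locking is disabled for it. A minimal sketch of the two launches, with socket paths as in the log and $SPDK_REPO as a stand-in:

  # first instance pins core 0 and takes the spdk_cpu_lock file for it
  $SPDK_REPO/build/bin/spdk_tgt -m 0x1 &
  # second instance shares core 0 only because locking is turned off for it,
  # and listens on a separate RPC socket so both targets can be driven independently
  $SPDK_REPO/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &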
00:07:35.139 [2024-07-15 13:06:31.839106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.397 [2024-07-15 13:06:32.050559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.963 13:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:35.963 13:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:35.963 13:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 75903 00:07:35.963 13:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75903 00:07:35.963 13:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:36.899 13:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 75903 00:07:36.899 13:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75903 ']' 00:07:36.899 13:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 75903 00:07:36.899 13:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:36.899 13:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:36.899 13:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75903 00:07:36.899 killing process with pid 75903 00:07:36.899 13:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:36.899 13:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:36.899 13:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75903' 00:07:36.899 13:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 75903 00:07:36.899 13:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 75903 00:07:37.833 13:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 75920 00:07:37.833 13:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75920 ']' 00:07:37.833 13:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 75920 00:07:37.833 13:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:37.833 13:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:37.833 13:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75920 00:07:37.833 killing process with pid 75920 00:07:37.833 13:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:37.833 13:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:37.833 13:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75920' 00:07:37.833 13:06:34 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 75920 00:07:37.833 13:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 75920 00:07:38.396 00:07:38.396 real 0m4.552s 00:07:38.396 user 0m5.001s 00:07:38.396 sys 0m1.372s 00:07:38.396 13:06:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:38.396 ************************************ 00:07:38.396 END TEST non_locking_app_on_locked_coremask 00:07:38.396 ************************************ 00:07:38.396 13:06:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:38.396 13:06:35 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:38.396 13:06:35 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:38.396 13:06:35 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:38.396 13:06:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:38.396 ************************************ 00:07:38.396 START TEST locking_app_on_unlocked_coremask 00:07:38.396 ************************************ 00:07:38.396 13:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:07:38.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.396 13:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=75989 00:07:38.396 13:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:38.396 13:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 75989 /var/tmp/spdk.sock 00:07:38.396 13:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75989 ']' 00:07:38.396 13:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.396 13:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:38.396 13:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.396 13:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:38.396 13:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:38.653 [2024-07-15 13:06:35.180208] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:38.653 [2024-07-15 13:06:35.180414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75989 ] 00:07:38.653 [2024-07-15 13:06:35.323919] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:38.653 [2024-07-15 13:06:35.324008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.910 [2024-07-15 13:06:35.424920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:39.475 13:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:39.475 13:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:39.475 13:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=76005 00:07:39.475 13:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 76005 /var/tmp/spdk2.sock 00:07:39.475 13:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 76005 ']' 00:07:39.475 13:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:39.475 13:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:39.475 13:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:39.475 13:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:39.475 13:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:39.475 13:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:39.731 [2024-07-15 13:06:36.232403] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
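Note: the locking_app_on_unlocked_coremask case traced above comes down to starting two targets on the same core, with the first one declining to take the per-core lock. A minimal sketch of the two commands involved (paths and socket name exactly as used by this harness):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks    # first target: core locks deactivated
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock     # second target on the same core: starts fine

Because the first target never claimed the per-core lock file (/var/tmp/spdk_cpu_lock_000 for core 0, per the lock checks later in this log), the second target can come up on core 0, which is what the rest of this test verifies.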
00:07:39.731 [2024-07-15 13:06:36.233852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76005 ] 00:07:39.732 [2024-07-15 13:06:36.397446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.990 [2024-07-15 13:06:36.605208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.554 13:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:40.554 13:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:40.554 13:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 76005 00:07:40.554 13:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 76005 00:07:40.554 13:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:41.487 13:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 75989 00:07:41.487 13:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75989 ']' 00:07:41.487 13:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 75989 00:07:41.487 13:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:41.487 13:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:41.487 13:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75989 00:07:41.487 13:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:41.487 killing process with pid 75989 00:07:41.487 13:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:41.487 13:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75989' 00:07:41.487 13:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 75989 00:07:41.487 13:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 75989 00:07:42.442 13:06:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 76005 00:07:42.442 13:06:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 76005 ']' 00:07:42.442 13:06:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 76005 00:07:42.442 13:06:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:42.442 13:06:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:42.442 13:06:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76005 00:07:42.442 killing process with pid 76005 00:07:42.442 13:06:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:42.442 13:06:39 event.cpu_locks.locking_app_on_unlocked_coremask 
-- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:42.442 13:06:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76005' 00:07:42.442 13:06:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 76005 00:07:42.442 13:06:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 76005 00:07:43.006 ************************************ 00:07:43.006 END TEST locking_app_on_unlocked_coremask 00:07:43.006 ************************************ 00:07:43.006 00:07:43.006 real 0m4.445s 00:07:43.006 user 0m4.872s 00:07:43.006 sys 0m1.363s 00:07:43.006 13:06:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:43.006 13:06:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:43.006 13:06:39 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:43.006 13:06:39 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:43.006 13:06:39 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:43.006 13:06:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:43.006 ************************************ 00:07:43.006 START TEST locking_app_on_locked_coremask 00:07:43.006 ************************************ 00:07:43.006 13:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:07:43.006 13:06:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=76074 00:07:43.006 13:06:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:43.006 13:06:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 76074 /var/tmp/spdk.sock 00:07:43.006 13:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 76074 ']' 00:07:43.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.006 13:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.006 13:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:43.006 13:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.006 13:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:43.006 13:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:43.006 [2024-07-15 13:06:39.664079] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:43.006 [2024-07-15 13:06:39.665278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76074 ] 00:07:43.263 [2024-07-15 13:06:39.807337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.263 [2024-07-15 13:06:39.909499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.196 13:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:44.196 13:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:44.196 13:06:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:44.196 13:06:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=76100 00:07:44.196 13:06:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 76100 /var/tmp/spdk2.sock 00:07:44.196 13:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:44.196 13:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 76100 /var/tmp/spdk2.sock 00:07:44.196 13:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:44.196 13:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.196 13:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:44.196 13:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.196 13:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 76100 /var/tmp/spdk2.sock 00:07:44.196 13:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 76100 ']' 00:07:44.196 13:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:44.196 13:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:44.196 13:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:44.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:44.196 13:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:44.196 13:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.196 [2024-07-15 13:06:40.780736] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:44.196 [2024-07-15 13:06:40.780905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76100 ] 00:07:44.455 [2024-07-15 13:06:40.934763] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 76074 has claimed it. 00:07:44.455 [2024-07-15 13:06:40.934867] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:45.019 ERROR: process (pid: 76100) is no longer running 00:07:45.019 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (76100) - No such process 00:07:45.019 13:06:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:45.019 13:06:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:45.019 13:06:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:45.019 13:06:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:45.019 13:06:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:45.019 13:06:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:45.019 13:06:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 76074 00:07:45.019 13:06:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 76074 00:07:45.019 13:06:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:45.277 13:06:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 76074 00:07:45.277 13:06:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 76074 ']' 00:07:45.277 13:06:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 76074 00:07:45.277 13:06:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:45.277 13:06:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:45.277 13:06:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76074 00:07:45.277 killing process with pid 76074 00:07:45.277 13:06:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:45.277 13:06:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:45.277 13:06:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76074' 00:07:45.277 13:06:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 76074 00:07:45.277 13:06:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 76074 00:07:45.847 00:07:45.847 real 0m2.762s 00:07:45.847 user 0m3.177s 00:07:45.847 sys 0m0.739s 00:07:45.847 13:06:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:45.847 13:06:42 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:07:45.847 ************************************ 00:07:45.847 END TEST locking_app_on_locked_coremask 00:07:45.847 ************************************ 00:07:45.847 13:06:42 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:45.847 13:06:42 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:45.847 13:06:42 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:45.847 13:06:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:45.847 ************************************ 00:07:45.847 START TEST locking_overlapped_coremask 00:07:45.847 ************************************ 00:07:45.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.847 13:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:07:45.847 13:06:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=76143 00:07:45.847 13:06:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:45.847 13:06:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 76143 /var/tmp/spdk.sock 00:07:45.847 13:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 76143 ']' 00:07:45.847 13:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.847 13:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:45.847 13:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.848 13:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:45.848 13:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:45.848 [2024-07-15 13:06:42.481599] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
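Note: in the locking_app_on_locked_coremask run above, the second spdk_tgt exits with "Cannot create lock on core 0, probably process 76074 has claimed it" followed by "Unable to acquire lock on assigned core mask - exiting.". The locks_exist helper traced in these tests amounts to the following check, which can also be run by hand to see whether a given pid holds its core lock (76074 is just the pid from this run; substitute the target being inspected):

  lslocks -p 76074 | grep -q spdk_cpu_lock && echo "pid 76074 holds its core lock(s)"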
00:07:45.848 [2024-07-15 13:06:42.482086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76143 ] 00:07:46.105 [2024-07-15 13:06:42.627901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:46.105 [2024-07-15 13:06:42.729245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.105 [2024-07-15 13:06:42.729329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.105 [2024-07-15 13:06:42.729399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.041 13:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:47.041 13:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:47.041 13:06:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=76161 00:07:47.041 13:06:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 76161 /var/tmp/spdk2.sock 00:07:47.041 13:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:47.041 13:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 76161 /var/tmp/spdk2.sock 00:07:47.041 13:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:47.041 13:06:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:47.041 13:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.041 13:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:47.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:47.041 13:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.041 13:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 76161 /var/tmp/spdk2.sock 00:07:47.041 13:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 76161 ']' 00:07:47.041 13:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:47.041 13:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:47.041 13:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:47.041 13:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:47.041 13:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:47.041 [2024-07-15 13:06:43.623976] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:47.041 [2024-07-15 13:06:43.624202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76161 ] 00:07:47.299 [2024-07-15 13:06:43.789518] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 76143 has claimed it. 00:07:47.299 [2024-07-15 13:06:43.789635] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:47.558 ERROR: process (pid: 76161) is no longer running 00:07:47.558 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (76161) - No such process 00:07:47.558 13:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:47.558 13:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:47.558 13:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:47.558 13:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:47.558 13:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:47.558 13:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:47.558 13:06:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:47.558 13:06:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:47.558 13:06:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:47.558 13:06:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:47.558 13:06:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 76143 00:07:47.558 13:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 76143 ']' 00:07:47.558 13:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 76143 00:07:47.558 13:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:07:47.558 13:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:47.558 13:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76143 00:07:47.816 killing process with pid 76143 00:07:47.816 13:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:47.816 13:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:47.816 13:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76143' 00:07:47.816 13:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 76143 00:07:47.816 13:06:44 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@970 -- # wait 76143 00:07:48.073 ************************************ 00:07:48.073 END TEST locking_overlapped_coremask 00:07:48.073 ************************************ 00:07:48.073 00:07:48.073 real 0m2.399s 00:07:48.073 user 0m6.512s 00:07:48.073 sys 0m0.631s 00:07:48.073 13:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:48.073 13:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:48.331 13:06:44 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:48.331 13:06:44 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:48.331 13:06:44 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:48.331 13:06:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:48.331 ************************************ 00:07:48.331 START TEST locking_overlapped_coremask_via_rpc 00:07:48.331 ************************************ 00:07:48.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.331 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:07:48.331 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=76209 00:07:48.331 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 76209 /var/tmp/spdk.sock 00:07:48.331 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 76209 ']' 00:07:48.331 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.331 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:48.331 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:48.331 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.331 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:48.331 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.331 [2024-07-15 13:06:44.933648] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:48.331 [2024-07-15 13:06:44.933852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76209 ] 00:07:48.611 [2024-07-15 13:06:45.082274] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
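Note: for the locking_overlapped_coremask case that just finished (and the via_rpc variant that follows), the failure lands on core 2 simply because that is where the two masks intersect:

  0x07 = 0b00111 -> cores 0,1,2   (first target, -m 0x7)
  0x1c = 0b11100 -> cores 2,3,4   (second target, -m 0x1c)

Core 2 is the only shared core, so that is the lock the second target cannot claim; the first target's lock files /var/tmp/spdk_cpu_lock_000 through _002 are then verified to still be in place by check_remaining_locks.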
00:07:48.611 [2024-07-15 13:06:45.082356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:48.611 [2024-07-15 13:06:45.188351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.611 [2024-07-15 13:06:45.188427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.611 [2024-07-15 13:06:45.188491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.176 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:49.176 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:49.176 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=76227 00:07:49.176 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:49.176 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 76227 /var/tmp/spdk2.sock 00:07:49.176 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 76227 ']' 00:07:49.176 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:49.176 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:49.176 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:49.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:49.176 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:49.176 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.434 [2024-07-15 13:06:46.021631] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:49.434 [2024-07-15 13:06:46.022132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76227 ] 00:07:49.691 [2024-07-15 13:06:46.187559] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:49.691 [2024-07-15 13:06:46.187675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:49.948 [2024-07-15 13:06:46.462825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:49.948 [2024-07-15 13:06:46.462913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.948 [2024-07-15 13:06:46.462970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:50.513 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:50.513 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:50.513 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:50.513 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.513 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.513 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.513 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:50.513 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:50.513 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:50.513 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:50.513 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.513 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:50.513 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.513 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:50.513 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.513 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.513 [2024-07-15 13:06:47.152559] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 76209 has claimed it. 00:07:50.513 request: 00:07:50.513 { 00:07:50.513 "method": "framework_enable_cpumask_locks", 00:07:50.513 "req_id": 1 00:07:50.513 } 00:07:50.513 Got JSON-RPC error response 00:07:50.513 response: 00:07:50.513 { 00:07:50.513 "code": -32603, 00:07:50.513 "message": "Failed to claim CPU core: 2" 00:07:50.513 } 00:07:50.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
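Note: the JSON-RPC exchange above is the heart of the via_rpc variant: both targets start with --disable-cpumask-locks and claim their cores at runtime instead. Assuming rpc_cmd resolves to scripts/rpc.py as in the stock autotest helpers, the two calls being traced are roughly:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks                          # first target (0x7): claims cores 0-2
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target (0x1c): fails with -32603, "Failed to claim CPU core: 2"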
00:07:50.513 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:50.513 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:50.513 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:50.513 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:50.513 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:50.513 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 76209 /var/tmp/spdk.sock 00:07:50.513 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 76209 ']' 00:07:50.513 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.513 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:50.513 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.513 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:50.513 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.077 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:51.077 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:51.077 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 76227 /var/tmp/spdk2.sock 00:07:51.077 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 76227 ']' 00:07:51.077 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:51.077 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:51.077 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:51.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:51.077 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:51.077 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.349 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:51.349 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:51.349 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:51.349 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:51.349 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:51.349 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:51.349 00:07:51.349 real 0m3.086s 00:07:51.349 user 0m1.583s 00:07:51.349 sys 0m0.237s 00:07:51.349 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:51.349 13:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.349 ************************************ 00:07:51.349 END TEST locking_overlapped_coremask_via_rpc 00:07:51.349 ************************************ 00:07:51.349 13:06:47 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:51.349 13:06:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 76209 ]] 00:07:51.349 13:06:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 76209 00:07:51.349 13:06:47 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 76209 ']' 00:07:51.349 13:06:47 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 76209 00:07:51.349 13:06:47 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:51.349 13:06:47 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:51.349 13:06:47 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76209 00:07:51.349 killing process with pid 76209 00:07:51.349 13:06:47 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:51.349 13:06:47 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:51.349 13:06:47 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76209' 00:07:51.349 13:06:47 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 76209 00:07:51.349 13:06:47 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 76209 00:07:51.947 13:06:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 76227 ]] 00:07:51.947 13:06:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 76227 00:07:51.947 13:06:48 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 76227 ']' 00:07:51.947 13:06:48 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 76227 00:07:51.947 13:06:48 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:51.947 13:06:48 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:51.947 
13:06:48 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76227 00:07:51.947 13:06:48 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:07:51.947 13:06:48 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:07:51.947 13:06:48 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76227' 00:07:51.947 killing process with pid 76227 00:07:51.947 13:06:48 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 76227 00:07:51.947 13:06:48 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 76227 00:07:52.511 13:06:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:52.511 13:06:49 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:52.511 13:06:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 76209 ]] 00:07:52.511 13:06:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 76209 00:07:52.511 13:06:49 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 76209 ']' 00:07:52.511 Process with pid 76209 is not found 00:07:52.511 Process with pid 76227 is not found 00:07:52.511 13:06:49 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 76209 00:07:52.511 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (76209) - No such process 00:07:52.511 13:06:49 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 76209 is not found' 00:07:52.511 13:06:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 76227 ]] 00:07:52.511 13:06:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 76227 00:07:52.511 13:06:49 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 76227 ']' 00:07:52.511 13:06:49 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 76227 00:07:52.511 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (76227) - No such process 00:07:52.511 13:06:49 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 76227 is not found' 00:07:52.511 13:06:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:52.511 ************************************ 00:07:52.511 END TEST cpu_locks 00:07:52.511 ************************************ 00:07:52.511 00:07:52.511 real 0m23.096s 00:07:52.511 user 0m40.582s 00:07:52.511 sys 0m6.878s 00:07:52.511 13:06:49 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:52.511 13:06:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:52.511 ************************************ 00:07:52.511 END TEST event 00:07:52.511 ************************************ 00:07:52.512 00:07:52.512 real 0m53.060s 00:07:52.512 user 1m43.055s 00:07:52.512 sys 0m10.896s 00:07:52.512 13:06:49 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:52.512 13:06:49 event -- common/autotest_common.sh@10 -- # set +x 00:07:52.769 13:06:49 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:52.769 13:06:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:52.769 13:06:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:52.769 13:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.769 ************************************ 00:07:52.769 START TEST thread 00:07:52.769 ************************************ 00:07:52.769 13:06:49 thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:52.769 * Looking for test storage... 
00:07:52.769 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:52.769 13:06:49 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:52.769 13:06:49 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:52.769 13:06:49 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:52.769 13:06:49 thread -- common/autotest_common.sh@10 -- # set +x 00:07:52.769 ************************************ 00:07:52.769 START TEST thread_poller_perf 00:07:52.769 ************************************ 00:07:52.769 13:06:49 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:52.769 [2024-07-15 13:06:49.386420] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:52.769 [2024-07-15 13:06:49.387139] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76362 ] 00:07:53.026 [2024-07-15 13:06:49.534384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.026 [2024-07-15 13:06:49.640303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.026 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:54.395 ====================================== 00:07:54.395 busy:2215321723 (cyc) 00:07:54.395 total_run_count: 291000 00:07:54.395 tsc_hz: 2200000000 (cyc) 00:07:54.395 ====================================== 00:07:54.395 poller_cost: 7612 (cyc), 3460 (nsec) 00:07:54.395 00:07:54.395 real 0m1.393s 00:07:54.395 user 0m1.192s 00:07:54.395 sys 0m0.091s 00:07:54.395 13:06:50 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:54.395 13:06:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:54.395 ************************************ 00:07:54.395 END TEST thread_poller_perf 00:07:54.395 ************************************ 00:07:54.395 13:06:50 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:54.395 13:06:50 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:54.395 13:06:50 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:54.395 13:06:50 thread -- common/autotest_common.sh@10 -- # set +x 00:07:54.395 ************************************ 00:07:54.395 START TEST thread_poller_perf 00:07:54.395 ************************************ 00:07:54.395 13:06:50 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:54.395 [2024-07-15 13:06:50.838601] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:54.395 [2024-07-15 13:06:50.839190] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76393 ] 00:07:54.395 [2024-07-15 13:06:50.991003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.395 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:54.395 [2024-07-15 13:06:51.090013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.766 ====================================== 00:07:55.766 busy:2204254652 (cyc) 00:07:55.766 total_run_count: 3720000 00:07:55.766 tsc_hz: 2200000000 (cyc) 00:07:55.766 ====================================== 00:07:55.766 poller_cost: 592 (cyc), 269 (nsec) 00:07:55.766 00:07:55.766 real 0m1.392s 00:07:55.766 user 0m1.185s 00:07:55.766 sys 0m0.098s 00:07:55.766 13:06:52 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:55.766 ************************************ 00:07:55.766 END TEST thread_poller_perf 00:07:55.766 ************************************ 00:07:55.766 13:06:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:55.766 13:06:52 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:55.766 ************************************ 00:07:55.766 END TEST thread 00:07:55.766 ************************************ 00:07:55.766 00:07:55.766 real 0m2.961s 00:07:55.766 user 0m2.442s 00:07:55.766 sys 0m0.295s 00:07:55.766 13:06:52 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:55.766 13:06:52 thread -- common/autotest_common.sh@10 -- # set +x 00:07:55.766 13:06:52 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:55.767 13:06:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:55.767 13:06:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:55.767 13:06:52 -- common/autotest_common.sh@10 -- # set +x 00:07:55.767 ************************************ 00:07:55.767 START TEST accel 00:07:55.767 ************************************ 00:07:55.767 13:06:52 accel -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:55.767 * Looking for test storage... 00:07:55.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:55.767 13:06:52 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:55.767 13:06:52 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:55.767 13:06:52 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:55.767 13:06:52 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=76474 00:07:55.767 13:06:52 accel -- accel/accel.sh@63 -- # waitforlisten 76474 00:07:55.767 13:06:52 accel -- common/autotest_common.sh@827 -- # '[' -z 76474 ']' 00:07:55.767 13:06:52 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.767 13:06:52 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:55.767 13:06:52 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:55.767 13:06:52 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
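Note: a quick sanity check of the two thread_poller_perf summaries above: poller_cost is roughly busy cycles divided by total_run_count, and the nanosecond figure follows from tsc_hz (2.2 GHz in this run):

  run 1 (-l 1, 1 us period): 2215321723 cyc / 291000 runs  ~ 7612 cyc;  7612 cyc / 2.2 GHz ~ 3460 ns per poller call
  run 2 (-l 0, 0 us period): 2204254652 cyc / 3720000 runs ~  592 cyc;   592 cyc / 2.2 GHz ~  269 ns per poller call

The zero-period run executes far more poller calls in the same second, so the measured per-call overhead is correspondingly smaller.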
00:07:55.767 13:06:52 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:55.767 13:06:52 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:55.767 13:06:52 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:55.767 13:06:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:55.767 13:06:52 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:55.767 13:06:52 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.767 13:06:52 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.767 13:06:52 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:55.767 13:06:52 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:55.767 13:06:52 accel -- accel/accel.sh@41 -- # jq -r . 00:07:55.767 [2024-07-15 13:06:52.475715] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:55.767 [2024-07-15 13:06:52.476337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76474 ] 00:07:56.028 [2024-07-15 13:06:52.618561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.028 [2024-07-15 13:06:52.715895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.978 13:06:53 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:56.978 13:06:53 accel -- common/autotest_common.sh@860 -- # return 0 00:07:56.978 13:06:53 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:56.978 13:06:53 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:56.978 13:06:53 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:56.978 13:06:53 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:56.979 13:06:53 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:56.979 13:06:53 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:56.979 13:06:53 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.979 13:06:53 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:56.979 13:06:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:56.979 13:06:53 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.979 13:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.979 13:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.979 13:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.979 13:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.979 13:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.979 13:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.979 13:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.979 13:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.979 13:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.979 13:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.979 13:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.979 13:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.979 13:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.979 13:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.979 13:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.979 13:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.979 13:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.979 13:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.979 13:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.979 13:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.979 13:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.979 13:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.979 
13:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.979 13:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.979 13:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.979 13:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.979 13:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.979 13:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.979 13:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.979 13:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.979 13:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.979 13:06:53 accel -- accel/accel.sh@75 -- # killprocess 76474 00:07:56.979 13:06:53 accel -- common/autotest_common.sh@946 -- # '[' -z 76474 ']' 00:07:56.979 13:06:53 accel -- common/autotest_common.sh@950 -- # kill -0 76474 00:07:56.979 13:06:53 accel -- common/autotest_common.sh@951 -- # uname 00:07:56.979 13:06:53 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:56.979 13:06:53 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76474 00:07:56.979 13:06:53 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:56.979 13:06:53 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:56.979 13:06:53 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76474' 00:07:56.979 killing process with pid 76474 00:07:56.979 13:06:53 accel -- common/autotest_common.sh@965 -- # kill 76474 00:07:56.979 13:06:53 accel -- common/autotest_common.sh@970 -- # wait 76474 00:07:57.237 13:06:53 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:57.237 13:06:53 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:57.237 13:06:53 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:57.237 13:06:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:57.237 13:06:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:57.237 13:06:53 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:07:57.237 13:06:53 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:57.237 13:06:53 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:57.237 13:06:53 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:57.237 13:06:53 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:57.237 13:06:53 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.237 13:06:53 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.237 13:06:53 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:57.237 13:06:53 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:57.237 13:06:53 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
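Note: the long run of expected_opcs[...]=software assignments above is the harness parsing accel_get_opc_assignments; in this run every opcode comes back mapped to the software module (no hardware accel modules are configured). The same table can be read directly, again assuming rpc_cmd resolves to scripts/rpc.py:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments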
00:07:57.495 13:06:54 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:57.495 13:06:54 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:57.495 13:06:54 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:57.495 13:06:54 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:57.495 13:06:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:57.495 13:06:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:57.495 ************************************ 00:07:57.495 START TEST accel_missing_filename 00:07:57.495 ************************************ 00:07:57.495 13:06:54 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:07:57.495 13:06:54 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:57.495 13:06:54 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:57.495 13:06:54 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:57.495 13:06:54 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:57.495 13:06:54 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:57.495 13:06:54 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:57.495 13:06:54 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:57.495 13:06:54 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:57.495 13:06:54 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:57.495 13:06:54 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:57.495 13:06:54 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:57.495 13:06:54 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.495 13:06:54 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.495 13:06:54 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:57.495 13:06:54 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:57.495 13:06:54 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:57.495 [2024-07-15 13:06:54.108244] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:57.495 [2024-07-15 13:06:54.108502] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76528 ] 00:07:57.754 [2024-07-15 13:06:54.261877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.754 [2024-07-15 13:06:54.358674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.754 [2024-07-15 13:06:54.417621] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:58.012 [2024-07-15 13:06:54.500918] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:58.012 A filename is required. 
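The accel_missing_filename case that just finished is a pure negative test: compress with no -l input file has to fail, and the harness's NOT wrapper turns that failure into a pass (the exit-status remapping es=234 -> 106 -> 1 follows in the next lines). A standalone sketch of the same check; the binary path is the one shown in the trace:

  accel_perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  if "$accel_perf" -t 1 -w compress; then
      echo "compress without -l unexpectedly succeeded" >&2
      exit 1
  fi
  echo "got the expected 'A filename is required.' failure"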
00:07:58.012 13:06:54 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:58.012 13:06:54 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:58.012 13:06:54 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:58.012 13:06:54 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:58.012 13:06:54 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:58.012 13:06:54 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:58.012 00:07:58.012 real 0m0.548s 00:07:58.012 user 0m0.316s 00:07:58.012 sys 0m0.166s 00:07:58.012 13:06:54 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:58.012 ************************************ 00:07:58.012 END TEST accel_missing_filename 00:07:58.012 ************************************ 00:07:58.012 13:06:54 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:58.012 13:06:54 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:58.012 13:06:54 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:58.012 13:06:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:58.012 13:06:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:58.012 ************************************ 00:07:58.012 START TEST accel_compress_verify 00:07:58.012 ************************************ 00:07:58.012 13:06:54 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:58.012 13:06:54 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:58.012 13:06:54 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:58.012 13:06:54 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:58.012 13:06:54 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:58.012 13:06:54 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:58.012 13:06:54 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:58.012 13:06:54 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:58.012 13:06:54 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:58.012 13:06:54 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:58.012 13:06:54 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:58.012 13:06:54 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:58.012 13:06:54 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.012 13:06:54 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.012 13:06:54 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:58.012 13:06:54 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:58.012 13:06:54 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:07:58.012 [2024-07-15 13:06:54.694859] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:58.012 [2024-07-15 13:06:54.695043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76553 ] 00:07:58.269 [2024-07-15 13:06:54.839790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.269 [2024-07-15 13:06:54.935303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.269 [2024-07-15 13:06:54.992998] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:58.527 [2024-07-15 13:06:55.076206] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:58.527 00:07:58.527 Compression does not support the verify option, aborting. 00:07:58.527 ************************************ 00:07:58.527 END TEST accel_compress_verify 00:07:58.527 ************************************ 00:07:58.527 13:06:55 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:58.527 13:06:55 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:58.527 13:06:55 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:58.527 13:06:55 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:58.527 13:06:55 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:58.527 13:06:55 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:58.527 00:07:58.527 real 0m0.528s 00:07:58.527 user 0m0.319s 00:07:58.527 sys 0m0.148s 00:07:58.527 13:06:55 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:58.527 13:06:55 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:58.527 13:06:55 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:58.527 13:06:55 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:58.527 13:06:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:58.527 13:06:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:58.527 ************************************ 00:07:58.527 START TEST accel_wrong_workload 00:07:58.527 ************************************ 00:07:58.527 13:06:55 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:07:58.527 13:06:55 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:58.527 13:06:55 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:58.527 13:06:55 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:58.527 13:06:55 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:58.527 13:06:55 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:58.527 13:06:55 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:58.527 13:06:55 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:58.527 13:06:55 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:58.527 13:06:55 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:58.527 13:06:55 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:58.527 13:06:55 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:58.527 13:06:55 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.527 13:06:55 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.527 13:06:55 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:58.527 13:06:55 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:58.527 13:06:55 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:58.785 Unsupported workload type: foobar 00:07:58.785 [2024-07-15 13:06:55.277696] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:58.785 accel_perf options: 00:07:58.785 [-h help message] 00:07:58.785 [-q queue depth per core] 00:07:58.785 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:58.785 [-T number of threads per core 00:07:58.785 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:58.785 [-t time in seconds] 00:07:58.785 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:58.785 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:58.785 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:58.785 [-l for compress/decompress workloads, name of uncompressed input file 00:07:58.785 [-S for crc32c workload, use this seed value (default 0) 00:07:58.785 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:58.785 [-f for fill workload, use this BYTE value (default 255) 00:07:58.785 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:58.785 [-y verify result if this switch is on] 00:07:58.785 [-a tasks to allocate per core (default: same value as -q)] 00:07:58.785 Can be used to spread operations across a wider range of memory. 
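For contrast with the rejected -w foobar, a sketch of an invocation that stays inside the option surface printed above; every flag is from that help text, the values are illustrative rather than taken from a specific run, and the binary path is the one used throughout this log:

  accel_perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  # -q queue depth per core, -o transfer size in bytes, -t run time in seconds,
  # -w workload type, -S crc32c seed, -y verify the result
  "$accel_perf" -q 32 -o 4096 -t 1 -w crc32c -S 32 -y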
00:07:58.785 13:06:55 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:58.785 13:06:55 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:58.785 13:06:55 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:58.785 13:06:55 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:58.785 00:07:58.785 real 0m0.064s 00:07:58.785 user 0m0.080s 00:07:58.785 sys 0m0.035s 00:07:58.785 13:06:55 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:58.785 13:06:55 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:58.785 ************************************ 00:07:58.785 END TEST accel_wrong_workload 00:07:58.785 ************************************ 00:07:58.785 13:06:55 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:58.785 13:06:55 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:58.785 13:06:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:58.785 13:06:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:58.785 ************************************ 00:07:58.785 START TEST accel_negative_buffers 00:07:58.785 ************************************ 00:07:58.785 13:06:55 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:58.785 13:06:55 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:58.785 13:06:55 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:58.785 13:06:55 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:58.785 13:06:55 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:58.785 13:06:55 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:58.785 13:06:55 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:58.785 13:06:55 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:58.785 13:06:55 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:58.785 13:06:55 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:58.785 13:06:55 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:58.785 13:06:55 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:58.785 13:06:55 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.785 13:06:55 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.785 13:06:55 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:58.785 13:06:55 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:58.786 13:06:55 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:58.786 -x option must be non-negative. 
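accel_negative_buffers follows the same pattern: -x is the xor source-buffer count (minimum 2 per the help text), so -x -1 is rejected at argument-parsing time, which is exactly what the NOT wrapper expects. A minimal sketch of the pair of checks; the passing counterpart is assumed, not taken from this log:

  accel_perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  ! "$accel_perf" -t 1 -w xor -y -x -1   # must fail: "-x option must be non-negative."
  "$accel_perf" -t 1 -w xor -y -x 3      # assumed-valid counterpart with three source buffers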
00:07:58.786 [2024-07-15 13:06:55.386403] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:58.786 accel_perf options: 00:07:58.786 [-h help message] 00:07:58.786 [-q queue depth per core] 00:07:58.786 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:58.786 [-T number of threads per core 00:07:58.786 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:58.786 [-t time in seconds] 00:07:58.786 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:58.786 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:58.786 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:58.786 [-l for compress/decompress workloads, name of uncompressed input file 00:07:58.786 [-S for crc32c workload, use this seed value (default 0) 00:07:58.786 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:58.786 [-f for fill workload, use this BYTE value (default 255) 00:07:58.786 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:58.786 [-y verify result if this switch is on] 00:07:58.786 [-a tasks to allocate per core (default: same value as -q)] 00:07:58.786 Can be used to spread operations across a wider range of memory. 00:07:58.786 13:06:55 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:58.786 13:06:55 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:58.786 13:06:55 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:58.786 13:06:55 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:58.786 00:07:58.786 real 0m0.066s 00:07:58.786 user 0m0.075s 00:07:58.786 sys 0m0.034s 00:07:58.786 13:06:55 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:58.786 13:06:55 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:58.786 ************************************ 00:07:58.786 END TEST accel_negative_buffers 00:07:58.786 ************************************ 00:07:58.786 13:06:55 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:58.786 13:06:55 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:58.786 13:06:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:58.786 13:06:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:58.786 ************************************ 00:07:58.786 START TEST accel_crc32c 00:07:58.786 ************************************ 00:07:58.786 13:06:55 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:58.786 13:06:55 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:58.786 13:06:55 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:58.786 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:58.786 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:58.786 13:06:55 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:58.786 13:06:55 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:58.786 13:06:55 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c 
/dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:58.786 13:06:55 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:58.786 13:06:55 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:58.786 13:06:55 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.786 13:06:55 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.786 13:06:55 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:58.786 13:06:55 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:58.786 13:06:55 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:58.786 [2024-07-15 13:06:55.504951] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:58.786 [2024-07-15 13:06:55.505255] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76615 ] 00:07:59.044 [2024-07-15 13:06:55.654062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.044 [2024-07-15 13:06:55.759660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.303 13:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.675 13:06:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:00.675 13:06:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.675 13:06:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.675 13:06:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.675 13:06:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 
00:08:00.675 13:06:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.675 13:06:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.675 13:06:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.675 13:06:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:00.675 13:06:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.675 13:06:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.675 13:06:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.675 13:06:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:00.675 13:06:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.675 13:06:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.675 13:06:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.675 13:06:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:00.675 13:06:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.675 13:06:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.675 13:06:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.675 13:06:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:00.675 13:06:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.675 13:06:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.675 13:06:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.675 13:06:57 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:00.675 13:06:57 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:00.675 13:06:57 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:00.675 00:08:00.675 real 0m1.553s 00:08:00.675 user 0m0.013s 00:08:00.675 sys 0m0.004s 00:08:00.675 13:06:57 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:00.675 13:06:57 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:00.675 ************************************ 00:08:00.675 END TEST accel_crc32c 00:08:00.675 ************************************ 00:08:00.675 13:06:57 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:08:00.675 13:06:57 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:00.675 13:06:57 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:00.675 13:06:57 accel -- common/autotest_common.sh@10 -- # set +x 00:08:00.675 ************************************ 00:08:00.675 START TEST accel_crc32c_C2 00:08:00.675 ************************************ 00:08:00.675 13:06:57 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:08:00.675 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:00.675 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:00.675 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:00.675 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:00.675 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:00.675 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:00.675 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:00.675 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 
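All of the val=... lines in this and the surrounding accel_test runs come from one mechanism: the harness walks accel_perf's configuration dump, splits each line on ':', and remembers the opcode and module it saw so it can assert on them once the run ends. A rough sketch of that loop and of the three checks that close every run; the key names and the exact dump format are assumptions, while the final [[ ... ]] tests mirror the ones visible in the trace:

  accel_perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  accel_opc="" accel_module=""
  while IFS=: read -r var val; do
      case "$var" in              # key names below are assumed, not from the log
          *opcode*) accel_opc=${val//[[:space:]]/} ;;
          *module*) accel_module=${val//[[:space:]]/} ;;
      esac
  done < <("$accel_perf" -t 1 -w crc32c -y -C 2)

  [[ -n $accel_module ]]            # a module was reported
  [[ -n $accel_opc ]]               # the opcode under test was reported
  [[ $accel_module == software ]]   # and it ran on the software path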
00:08:00.675 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:00.675 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.675 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.675 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:00.675 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:00.675 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:00.676 [2024-07-15 13:06:57.103918] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:00.676 [2024-07-15 13:06:57.104140] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76656 ] 00:08:00.676 [2024-07-15 13:06:57.255133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.676 [2024-07-15 13:06:57.359617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.934 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:00.934 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.934 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:00.934 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:00.934 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:00.934 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.934 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:00.934 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:00.934 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:00.934 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.934 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:00.934 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:00.934 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:00.934 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 
00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:00.935 13:06:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # read -r var val 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:02.306 00:08:02.306 real 0m1.562s 00:08:02.306 user 0m1.304s 00:08:02.306 sys 0m0.160s 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:02.306 ************************************ 00:08:02.306 END TEST accel_crc32c_C2 00:08:02.306 ************************************ 00:08:02.306 13:06:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:02.306 13:06:58 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:08:02.306 13:06:58 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:02.306 13:06:58 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:02.306 13:06:58 accel -- common/autotest_common.sh@10 -- # set +x 00:08:02.306 ************************************ 00:08:02.306 START TEST accel_copy 00:08:02.306 ************************************ 00:08:02.306 13:06:58 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:08:02.306 13:06:58 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:02.306 13:06:58 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:08:02.306 13:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.306 13:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.307 13:06:58 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:08:02.307 13:06:58 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:02.307 13:06:58 accel.accel_copy -- accel/accel.sh@12 -- # 
build_accel_config 00:08:02.307 13:06:58 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:02.307 13:06:58 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:02.307 13:06:58 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.307 13:06:58 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.307 13:06:58 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:02.307 13:06:58 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:02.307 13:06:58 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:08:02.307 [2024-07-15 13:06:58.712852] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:02.307 [2024-07-15 13:06:58.713046] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76687 ] 00:08:02.307 [2024-07-15 13:06:58.865244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.307 [2024-07-15 13:06:58.972773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
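The build_accel_config lines that open every run above assemble the JSON that accel_perf receives through -c /dev/fd/62: an array of per-backend config fragments is joined with commas (the local IFS=, in the trace) and pretty-printed with jq -r. A rough sketch under stated assumptions; the JSON envelope and the example fragment are guesses, and only the array name, the IFS trick, and the jq call come from the trace:

  build_accel_config() {
      accel_json_cfg=()
      # backends detected on the machine would append method fragments here, e.g.
      # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')
      local IFS=,
      printf '{"subsystems":[{"subsystem":"accel","config":[%s]}]}\n' "${accel_json_cfg[*]}" | jq -r .
  }
  # accel_perf -c <(build_accel_config) ...   # the <(...) is what appears as /dev/fd/62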
00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.307 13:06:59 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.565 13:06:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.499 13:07:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:03.499 13:07:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:03.499 13:07:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:03.499 13:07:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.499 13:07:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:03.499 13:07:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:03.499 13:07:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:03.499 13:07:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.499 13:07:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:03.499 13:07:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:03.499 13:07:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:03.499 13:07:00 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:08:03.499 13:07:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:03.499 13:07:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:03.499 13:07:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:03.499 13:07:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.499 13:07:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:03.499 13:07:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:03.499 13:07:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:03.499 13:07:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.499 13:07:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:03.499 13:07:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:03.499 13:07:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:03.499 13:07:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.499 13:07:00 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:03.499 13:07:00 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:08:03.499 13:07:00 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:03.499 00:08:03.500 real 0m1.571s 00:08:03.500 user 0m1.301s 00:08:03.500 sys 0m0.170s 00:08:03.500 13:07:00 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:03.500 ************************************ 00:08:03.500 END TEST accel_copy 00:08:03.500 ************************************ 00:08:03.500 13:07:00 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:08:03.759 13:07:00 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:03.759 13:07:00 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:03.759 13:07:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:03.759 13:07:00 accel -- common/autotest_common.sh@10 -- # set +x 00:08:03.759 ************************************ 00:08:03.759 START TEST accel_fill 00:08:03.759 ************************************ 00:08:03.759 13:07:00 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:03.759 13:07:00 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:08:03.759 13:07:00 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:08:03.759 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:03.759 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:03.759 13:07:00 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:03.759 13:07:00 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:03.759 13:07:00 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:08:03.759 13:07:00 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:03.759 13:07:00 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:03.759 13:07:00 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.759 13:07:00 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.759 13:07:00 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:03.759 13:07:00 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:08:03.759 13:07:00 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
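The fill run set up just above exercises more of the option surface than the earlier tests. Mapping its flags back to the help text printed earlier (the flags and values are from the trace; the standalone path assignment is the only addition):

  accel_perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  # -f 128: fill BYTE value, -q 64: queue depth per core,
  # -a 64: tasks allocated per core, -y: verify the result
  "$accel_perf" -t 1 -w fill -f 128 -q 64 -a 64 -y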
00:08:03.759 [2024-07-15 13:07:00.330004] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:03.759 [2024-07-15 13:07:00.330236] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76727 ] 00:08:03.759 [2024-07-15 13:07:00.475849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.019 [2024-07-15 13:07:00.577271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:08:04.019 13:07:00 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:04.019 13:07:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:04.020 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:04.020 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:04.020 13:07:00 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:04.020 13:07:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:04.020 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:04.020 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:04.020 13:07:00 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:08:04.020 13:07:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:04.020 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:04.020 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:04.020 13:07:00 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:08:04.020 13:07:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:04.020 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:04.020 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:04.020 13:07:00 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:08:04.020 13:07:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:04.020 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:04.020 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:04.020 13:07:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:04.020 13:07:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:04.020 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:04.020 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:04.020 13:07:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:04.020 13:07:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:04.020 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:04.020 13:07:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:05.405 13:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:05.405 13:07:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:05.405 13:07:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:05.405 13:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:05.405 13:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:05.405 13:07:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:05.405 13:07:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:05.405 13:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:05.405 13:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:05.405 13:07:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:05.405 13:07:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:05.405 13:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:05.405 13:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:05.405 13:07:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:05.405 13:07:01 accel.accel_fill -- accel/accel.sh@19 -- # 
IFS=: 00:08:05.405 13:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:05.405 13:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:05.405 13:07:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:05.405 13:07:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:05.405 13:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:05.405 13:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:05.405 13:07:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:05.405 13:07:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:05.405 13:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:05.405 13:07:01 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:05.405 13:07:01 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:08:05.405 13:07:01 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:05.405 00:08:05.405 real 0m1.548s 00:08:05.405 user 0m0.015s 00:08:05.405 sys 0m0.003s 00:08:05.405 13:07:01 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:05.405 13:07:01 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:08:05.405 ************************************ 00:08:05.405 END TEST accel_fill 00:08:05.405 ************************************ 00:08:05.405 13:07:01 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:08:05.405 13:07:01 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:05.405 13:07:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:05.405 13:07:01 accel -- common/autotest_common.sh@10 -- # set +x 00:08:05.405 ************************************ 00:08:05.405 START TEST accel_copy_crc32c 00:08:05.405 ************************************ 00:08:05.405 13:07:01 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:08:05.405 13:07:01 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:05.405 13:07:01 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:05.405 13:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.405 13:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.405 13:07:01 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:05.405 13:07:01 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:05.405 13:07:01 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:05.405 13:07:01 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:05.405 13:07:01 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:05.405 13:07:01 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:05.405 13:07:01 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:05.405 13:07:01 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:05.405 13:07:01 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:05.405 13:07:01 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:05.405 [2024-07-15 13:07:01.936950] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
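The block above is accel/accel.sh replaying the settings of the accel_fill run it just finished (via its `read -r var val` / `case "$var"` loop) and then launching the copy_crc32c case through build/examples/accel_perf. To repeat that copy_crc32c measurement by hand, a minimal sketch, assuming the default software accel module so the harness's fd-based `-c /dev/fd/62` JSON config can be left out, would be:

    cd /home/vagrant/spdk_repo/spdk
    # 1-second copy + CRC32C run with result verification, matching the flags logged above
    ./build/examples/accel_perf -t 1 -w copy_crc32c -y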
00:08:05.405 [2024-07-15 13:07:01.937232] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76761 ] 00:08:05.405 [2024-07-15 13:07:02.090361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.662 [2024-07-15 13:07:02.195700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.662 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:05.662 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.662 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.662 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.662 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:05.662 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.662 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.662 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.662 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.663 13:07:02 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.663 13:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- accel/accel.sh@20 
-- # val= 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:07.043 00:08:07.043 real 0m1.564s 00:08:07.043 user 0m0.018s 00:08:07.043 sys 0m0.001s 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:07.043 ************************************ 00:08:07.043 END TEST accel_copy_crc32c 00:08:07.043 ************************************ 00:08:07.043 13:07:03 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:07.043 13:07:03 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:07.043 13:07:03 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:07.043 13:07:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:07.043 13:07:03 accel -- common/autotest_common.sh@10 -- # set +x 00:08:07.043 ************************************ 00:08:07.043 START TEST accel_copy_crc32c_C2 00:08:07.043 ************************************ 00:08:07.043 13:07:03 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:07.043 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:07.043 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:07.043 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.043 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.043 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:07.043 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
copy_crc32c -y -C 2 00:08:07.043 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:07.043 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:07.043 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:07.043 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.043 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.043 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:07.043 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:07.043 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:07.043 [2024-07-15 13:07:03.540330] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:07.043 [2024-07-15 13:07:03.540505] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76798 ] 00:08:07.043 [2024-07-15 13:07:03.684622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.301 [2024-07-15 13:07:03.786720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.301 13:07:03 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
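The _C2 case being traced here differs from the previous copy_crc32c run only in the extra `-C 2` on the accel_perf command line; going by that flag and the 4096-byte/8192-byte buffer sizes read back above (where the plain run showed 4096/4096), it appears to chain two source buffers per operation. A hedged manual equivalent:

    # copy_crc32c with the chain count of 2 passed by run_test above
    ./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2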
00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.301 13:07:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:08.698 00:08:08.698 real 0m1.541s 00:08:08.698 user 0m1.295s 00:08:08.698 sys 0m0.155s 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:08.698 13:07:05 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:08.698 ************************************ 00:08:08.698 END TEST accel_copy_crc32c_C2 00:08:08.698 ************************************ 00:08:08.698 13:07:05 accel -- accel/accel.sh@107 -- # run_test accel_dualcast 
accel_test -t 1 -w dualcast -y 00:08:08.698 13:07:05 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:08.698 13:07:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:08.698 13:07:05 accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.698 ************************************ 00:08:08.698 START TEST accel_dualcast 00:08:08.698 ************************************ 00:08:08.698 13:07:05 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:08:08.698 13:07:05 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:08:08.698 13:07:05 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:08:08.698 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:08.698 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:08.698 13:07:05 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:08.698 13:07:05 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:08.698 13:07:05 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:08:08.698 13:07:05 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.698 13:07:05 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.698 13:07:05 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.698 13:07:05 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.698 13:07:05 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.698 13:07:05 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:08:08.698 13:07:05 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:08:08.698 [2024-07-15 13:07:05.138590] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
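At this point run_test switches to the dualcast opcode (a single source written to two destination buffers). Reproducing it standalone, under the same assumption that the default software module is fine without the harness-supplied config:

    # 1-second dualcast run with verification
    ./build/examples/accel_perf -t 1 -w dualcast -y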
00:08:08.698 [2024-07-15 13:07:05.138786] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76833 ] 00:08:08.698 [2024-07-15 13:07:05.333391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.956 [2024-07-15 13:07:05.450339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- 
# read -r var val 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:08.956 13:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:10.330 13:07:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:10.330 13:07:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:10.330 13:07:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:10.330 13:07:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:10.330 13:07:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:10.330 13:07:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:10.330 13:07:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:10.330 13:07:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:10.330 13:07:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:10.330 13:07:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:10.330 13:07:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:10.330 13:07:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:10.330 13:07:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:10.330 13:07:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:10.330 13:07:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:10.330 13:07:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:10.330 
13:07:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:10.330 13:07:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:10.330 13:07:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:10.330 13:07:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:10.330 13:07:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:10.330 13:07:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:10.330 13:07:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:10.330 13:07:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:10.330 13:07:06 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:10.330 13:07:06 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:08:10.330 13:07:06 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:10.330 00:08:10.330 real 0m1.608s 00:08:10.330 user 0m0.014s 00:08:10.330 sys 0m0.002s 00:08:10.330 13:07:06 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:10.330 ************************************ 00:08:10.330 END TEST accel_dualcast 00:08:10.330 ************************************ 00:08:10.330 13:07:06 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:08:10.330 13:07:06 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:10.330 13:07:06 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:10.330 13:07:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:10.330 13:07:06 accel -- common/autotest_common.sh@10 -- # set +x 00:08:10.330 ************************************ 00:08:10.330 START TEST accel_compare 00:08:10.330 ************************************ 00:08:10.330 13:07:06 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:08:10.330 13:07:06 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:08:10.330 13:07:06 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:08:10.330 13:07:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:10.330 13:07:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:10.330 13:07:06 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:10.330 13:07:06 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:10.330 13:07:06 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:08:10.330 13:07:06 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:10.330 13:07:06 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:10.330 13:07:06 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.330 13:07:06 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.330 13:07:06 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:10.330 13:07:06 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:08:10.330 13:07:06 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:08:10.330 [2024-07-15 13:07:06.801409] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
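The compare workload that starts here checks that two buffers match rather than producing output data; the harness drives it with the same flag pattern as the earlier cases. A minimal manual sketch along those lines:

    # 1-second compare run with verification
    ./build/examples/accel_perf -t 1 -w compare -y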
00:08:10.330 [2024-07-15 13:07:06.801612] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76869 ] 00:08:10.330 [2024-07-15 13:07:06.952808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.589 [2024-07-15 13:07:07.073029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:10.589 13:07:07 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:10.589 13:07:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:11.662 13:07:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:11.662 13:07:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:11.662 13:07:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:11.662 13:07:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:11.662 13:07:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:11.662 13:07:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:11.662 13:07:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:11.662 13:07:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:11.662 13:07:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:11.662 13:07:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:11.662 13:07:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:11.662 13:07:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:11.662 13:07:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:11.662 13:07:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:11.662 13:07:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:11.662 13:07:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:11.662 13:07:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:11.662 13:07:08 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:11.662 13:07:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:11.662 13:07:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:11.662 13:07:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:11.662 13:07:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:11.662 13:07:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:11.662 13:07:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:11.662 ************************************ 00:08:11.662 END TEST accel_compare 00:08:11.662 ************************************ 00:08:11.662 13:07:08 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:11.662 13:07:08 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:11.662 13:07:08 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:11.662 00:08:11.662 real 0m1.583s 00:08:11.662 user 0m0.016s 00:08:11.663 sys 0m0.002s 00:08:11.663 13:07:08 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:11.663 13:07:08 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:08:11.924 13:07:08 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:11.924 13:07:08 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:11.924 13:07:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:11.924 13:07:08 accel -- common/autotest_common.sh@10 -- # set +x 00:08:11.924 ************************************ 00:08:11.924 START TEST accel_xor 00:08:11.924 ************************************ 00:08:11.924 13:07:08 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:08:11.924 13:07:08 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:11.924 13:07:08 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:11.924 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:11.924 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:11.924 13:07:08 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:11.924 13:07:08 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:11.924 13:07:08 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:11.924 13:07:08 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.924 13:07:08 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.924 13:07:08 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.924 13:07:08 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.924 13:07:08 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.924 13:07:08 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:11.924 13:07:08 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:11.924 [2024-07-15 13:07:08.439475] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
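For this first xor case the trace shows a source count of 2 (the `val=2` read just after the opcode), which is what the harness gets when no `-x` flag is given. A manual equivalent, with the same caveat about omitting the fd-based config:

    # XOR two 4096-byte source buffers into one destination, 1 second, verified
    ./build/examples/accel_perf -t 1 -w xor -y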
00:08:11.924 [2024-07-15 13:07:08.439768] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76910 ] 00:08:11.924 [2024-07-15 13:07:08.595916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.184 [2024-07-15 13:07:08.697108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:08:12.184 13:07:08 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:12.184 13:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:13.559 13:07:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:13.559 13:07:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:13.559 13:07:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:13.559 13:07:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:13.559 13:07:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:13.559 13:07:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:13.559 13:07:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:13.559 13:07:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:13.559 13:07:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:13.559 13:07:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:13.559 13:07:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:13.559 13:07:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:13.559 13:07:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:13.559 13:07:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:13.559 13:07:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:13.559 13:07:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:13.559 13:07:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:13.559 13:07:09 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:08:13.559 13:07:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:13.559 13:07:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:13.559 13:07:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:13.559 13:07:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:13.559 13:07:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:13.559 13:07:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:13.559 13:07:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:13.559 13:07:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:13.559 13:07:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.559 00:08:13.559 real 0m1.565s 00:08:13.559 user 0m1.302s 00:08:13.559 sys 0m0.171s 00:08:13.559 13:07:09 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:13.559 13:07:09 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:13.559 ************************************ 00:08:13.559 END TEST accel_xor 00:08:13.559 ************************************ 00:08:13.559 13:07:09 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:13.559 13:07:09 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:13.559 13:07:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:13.559 13:07:09 accel -- common/autotest_common.sh@10 -- # set +x 00:08:13.559 ************************************ 00:08:13.559 START TEST accel_xor 00:08:13.559 ************************************ 00:08:13.559 13:07:10 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:08:13.559 13:07:10 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:13.559 13:07:10 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:13.559 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:13.559 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:13.559 13:07:10 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:13.559 13:07:10 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:13.559 13:07:10 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:13.559 13:07:10 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:13.559 13:07:10 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:13.559 13:07:10 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.559 13:07:10 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.560 13:07:10 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:13.560 13:07:10 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:13.560 13:07:10 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:13.560 [2024-07-15 13:07:10.047688] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:13.560 [2024-07-15 13:07:10.047903] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76940 ] 00:08:13.560 [2024-07-15 13:07:10.197491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.818 [2024-07-15 13:07:10.304376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:08:13.818 13:07:10 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:13.818 13:07:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:15.191 13:07:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:15.191 13:07:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:15.191 13:07:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:15.191 13:07:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:15.191 13:07:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:15.191 13:07:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:15.191 13:07:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:15.191 13:07:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:15.191 13:07:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:15.191 13:07:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:15.191 13:07:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:15.191 13:07:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:15.191 13:07:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:15.191 13:07:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:15.191 13:07:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:15.191 13:07:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:15.191 13:07:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:15.191 13:07:11 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:08:15.191 13:07:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:15.191 13:07:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:15.191 13:07:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:15.191 13:07:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:15.191 13:07:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:15.191 13:07:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:15.191 13:07:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:15.191 13:07:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:15.191 13:07:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:15.191 00:08:15.191 real 0m1.559s 00:08:15.191 user 0m1.303s 00:08:15.191 sys 0m0.160s 00:08:15.191 13:07:11 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:15.191 13:07:11 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:15.191 ************************************ 00:08:15.191 END TEST accel_xor 00:08:15.191 ************************************ 00:08:15.191 13:07:11 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:15.191 13:07:11 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:08:15.191 13:07:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:15.191 13:07:11 accel -- common/autotest_common.sh@10 -- # set +x 00:08:15.191 ************************************ 00:08:15.191 START TEST accel_dif_verify 00:08:15.191 ************************************ 00:08:15.191 13:07:11 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:08:15.191 13:07:11 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:15.191 13:07:11 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:08:15.191 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:15.191 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:15.191 13:07:11 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:15.191 13:07:11 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:15.191 13:07:11 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:15.191 13:07:11 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:15.191 13:07:11 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:15.191 13:07:11 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.192 13:07:11 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.192 13:07:11 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:15.192 13:07:11 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:15.192 13:07:11 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:15.192 [2024-07-15 13:07:11.660455] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
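The dense IFS=: / read -r var val / case "$var" in entries that fill these tests are bash xtrace output of accel.sh (lines 19 through 23) parsing the settings that accel_perf reports back, one var:val pair per line. Reconstructed only from the script line numbers visible in the trace, with the case patterns and the sample input invented for illustration, the loop is roughly:

# Illustrative sketch of the accel.sh parsing loop behind the @19-@23 entries;
# the real case patterns and the exact cleanup done on line 20 are not visible here.
while IFS=: read -r var val; do          # accel.sh@19
  val=${val# }                           # accel.sh@20 reassigns val (the many "val=32", "val='4096 bytes'" entries); cleanup assumed
  case "$var" in                         # accel.sh@21
    *module*) accel_module=$val ;;       # accel.sh@22: "accel_module=software"
    *opcode*) accel_opc=$val ;;          # accel.sh@23: "accel_opc=xor", "accel_opc=dif_verify", ...
  esac
done <<'EOF'
module: software
opcode: xor
EOF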
00:08:15.192 [2024-07-15 13:07:11.660713] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76981 ] 00:08:15.192 [2024-07-15 13:07:11.811971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.192 [2024-07-15 13:07:11.916043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:15.467 13:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:16.434 13:07:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:16.434 13:07:13 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:08:16.434 13:07:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:16.434 13:07:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:16.434 13:07:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:16.434 13:07:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:16.434 13:07:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:16.434 13:07:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:16.434 13:07:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:16.434 13:07:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:16.434 13:07:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:16.434 13:07:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:16.435 13:07:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:16.435 13:07:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:16.435 13:07:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:16.435 13:07:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:16.435 13:07:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:16.435 13:07:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:16.435 13:07:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:16.435 13:07:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:16.435 13:07:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:16.435 13:07:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:16.435 13:07:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:16.435 13:07:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:16.435 13:07:13 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:16.435 13:07:13 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:16.435 13:07:13 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:16.435 00:08:16.435 real 0m1.558s 00:08:16.435 user 0m1.290s 00:08:16.435 sys 0m0.169s 00:08:16.435 13:07:13 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:16.435 13:07:13 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:16.435 ************************************ 00:08:16.435 END TEST accel_dif_verify 00:08:16.435 ************************************ 00:08:16.691 13:07:13 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:16.691 13:07:13 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:08:16.691 13:07:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:16.691 13:07:13 accel -- common/autotest_common.sh@10 -- # set +x 00:08:16.691 ************************************ 00:08:16.691 START TEST accel_dif_generate 00:08:16.691 ************************************ 00:08:16.691 13:07:13 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:08:16.691 13:07:13 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:16.691 13:07:13 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:16.691 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:16.691 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:16.691 13:07:13 accel.accel_dif_generate -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w dif_generate 00:08:16.691 13:07:13 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:16.691 13:07:13 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:16.691 13:07:13 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:16.691 13:07:13 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:16.691 13:07:13 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.691 13:07:13 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.691 13:07:13 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:16.691 13:07:13 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:16.691 13:07:13 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:16.691 [2024-07-15 13:07:13.259199] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:16.691 [2024-07-15 13:07:13.259383] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77011 ] 00:08:16.691 [2024-07-15 13:07:13.403621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.948 [2024-07-15 13:07:13.502839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.948 13:07:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:16.948 13:07:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:16.948 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:16.948 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:16.948 13:07:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:16.948 13:07:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:16.948 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:16.948 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:16.948 13:07:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:16.948 13:07:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:16.948 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:16.948 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:16.948 13:07:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:16.948 13:07:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:16.948 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:16.948 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:16.948 13:07:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:16.948 13:07:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:16.948 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:16.948 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:16.948 13:07:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:16.948 13:07:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:16.948 13:07:13 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:16.948 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:16.948 
13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:16.948 13:07:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:16.949 13:07:13 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:16.949 13:07:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:18.321 13:07:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:18.321 13:07:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:18.321 13:07:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:18.321 13:07:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:18.321 13:07:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:18.321 13:07:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:18.321 13:07:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:18.321 13:07:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:18.321 13:07:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:18.321 13:07:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:18.321 13:07:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:18.321 13:07:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:18.321 13:07:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:18.321 13:07:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:18.321 13:07:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:18.321 13:07:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:18.321 13:07:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:18.321 13:07:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:18.321 13:07:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:18.321 13:07:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:18.321 13:07:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:18.321 13:07:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:18.321 13:07:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:18.321 13:07:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:18.321 13:07:14 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:18.321 13:07:14 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:18.321 13:07:14 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:18.321 00:08:18.321 real 0m1.538s 00:08:18.321 user 0m1.292s 00:08:18.321 sys 0m0.154s 00:08:18.321 13:07:14 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:18.321 
13:07:14 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:18.321 ************************************ 00:08:18.321 END TEST accel_dif_generate 00:08:18.321 ************************************ 00:08:18.321 13:07:14 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:18.321 13:07:14 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:08:18.321 13:07:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:18.321 13:07:14 accel -- common/autotest_common.sh@10 -- # set +x 00:08:18.321 ************************************ 00:08:18.321 START TEST accel_dif_generate_copy 00:08:18.321 ************************************ 00:08:18.321 13:07:14 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:08:18.321 13:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:18.321 13:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:18.321 13:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.321 13:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.321 13:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:18.321 13:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:18.321 13:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:18.321 13:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:18.321 13:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:18.321 13:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.321 13:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.321 13:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:18.321 13:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:18.321 13:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:18.321 [2024-07-15 13:07:14.848910] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
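Every run in this block is judged by the same three accel.sh@27 checks that close it: a module name and an opcode were captured from the parsed output, and the module that executed the opcode is the software fallback. Written out as plain shell, as a sketch of what the trace shows rather than a verbatim excerpt of accel.sh:

# Values the parsing loop captured for the run above (taken from the trace):
accel_module=software
accel_opc=dif_generate      # xor, dif_verify, dif_generate_copy, ... in the other runs
# The three accel.sh@27 assertions; a non-zero exit from any of them would make
# the surrounding run_test record the test as failed.
[[ -n "$accel_module" ]]                # some module was reported
[[ -n "$accel_opc" ]]                   # some opcode was exercised
[[ "$accel_module" == "software" ]]     # and it ran on the software path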
00:08:18.321 [2024-07-15 13:07:14.849105] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77052 ] 00:08:18.321 [2024-07-15 13:07:14.994704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.580 [2024-07-15 13:07:15.099678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 
-- # val= 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.580 13:07:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:19.954 00:08:19.954 real 0m1.550s 00:08:19.954 user 0m1.283s 00:08:19.954 sys 0m0.172s 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:19.954 13:07:16 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:19.954 ************************************ 00:08:19.954 END TEST accel_dif_generate_copy 00:08:19.954 ************************************ 00:08:19.954 13:07:16 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:19.954 13:07:16 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:19.954 13:07:16 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:08:19.954 13:07:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:19.954 13:07:16 accel -- common/autotest_common.sh@10 -- # set +x 00:08:19.954 ************************************ 00:08:19.954 START TEST accel_comp 00:08:19.954 ************************************ 00:08:19.954 13:07:16 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:19.954 13:07:16 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:19.954 13:07:16 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:19.954 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
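The compress case launched above differs from the earlier workloads mainly in feeding a real input file through the engine: -l /home/vagrant/spdk_repo/spdk/test/accel/bib, an input file shipped with the SPDK tests. A hand-run sketch of the same invocation, under the same assumptions as the earlier snippet (built tree at /home/vagrant/spdk_repo/spdk, no -c config so the software module is used):

SPDK_DIR=/home/vagrant/spdk_repo/spdk
# 1-second software compress of the bundled test file, matching the
# "-t 1 -w compress -l .../test/accel/bib" arguments in the run_test line above.
sudo "$SPDK_DIR/build/examples/accel_perf" -t 1 -w compress -l "$SPDK_DIR/test/accel/bib"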
00:08:19.954 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:19.954 13:07:16 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:19.954 13:07:16 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:19.954 13:07:16 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:19.954 13:07:16 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:19.954 13:07:16 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:19.954 13:07:16 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:19.954 13:07:16 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:19.954 13:07:16 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:19.954 13:07:16 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:19.954 13:07:16 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:19.954 [2024-07-15 13:07:16.446788] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:19.954 [2024-07-15 13:07:16.447000] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77088 ] 00:08:19.954 [2024-07-15 13:07:16.601944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.212 [2024-07-15 13:07:16.713886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@20 -- # 
val=compress 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:20.212 13:07:16 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:20.212 13:07:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:21.585 13:07:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:21.585 13:07:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.585 13:07:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:21.585 13:07:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:21.585 13:07:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:21.585 13:07:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.585 13:07:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:21.585 13:07:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:21.585 13:07:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:21.585 13:07:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.585 13:07:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:21.585 13:07:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:21.585 13:07:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:21.585 13:07:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.585 13:07:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:21.585 13:07:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:21.585 13:07:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:21.585 13:07:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.585 13:07:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:21.585 13:07:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:21.585 13:07:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:21.585 13:07:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.585 13:07:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:21.585 13:07:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:21.585 13:07:17 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:21.585 13:07:17 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:21.585 13:07:17 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:21.585 00:08:21.585 real 0m1.581s 00:08:21.585 user 0m1.313s 00:08:21.585 sys 0m0.173s 00:08:21.585 13:07:17 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:21.585 ************************************ 00:08:21.585 END TEST accel_comp 00:08:21.585 ************************************ 00:08:21.585 13:07:17 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:21.585 13:07:18 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:21.585 13:07:18 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:21.585 13:07:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:21.585 13:07:18 accel -- common/autotest_common.sh@10 -- # set +x 00:08:21.585 ************************************ 00:08:21.585 START TEST accel_decomp 00:08:21.585 ************************************ 00:08:21.585 13:07:18 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:21.585 13:07:18 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:21.585 
13:07:18 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:21.585 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.585 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.585 13:07:18 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:21.585 13:07:18 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:21.585 13:07:18 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:21.585 13:07:18 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:21.585 13:07:18 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:21.585 13:07:18 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:21.585 13:07:18 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:21.585 13:07:18 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:21.585 13:07:18 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:21.585 13:07:18 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:21.585 [2024-07-15 13:07:18.090623] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:21.585 [2024-07-15 13:07:18.090824] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77123 ] 00:08:21.585 [2024-07-15 13:07:18.242344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.843 [2024-07-15 13:07:18.344686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.843 13:07:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:21.843 13:07:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.843 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.843 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.843 13:07:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:21.843 13:07:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" 
in 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.844 13:07:18 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.844 13:07:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:23.216 13:07:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:23.216 13:07:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:23.216 13:07:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:23.216 13:07:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:23.216 13:07:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:23.216 13:07:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:23.216 13:07:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:23.216 13:07:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:23.216 13:07:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:23.216 13:07:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:23.216 13:07:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:23.216 13:07:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:23.216 13:07:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:23.216 13:07:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:23.216 13:07:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:23.216 13:07:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:23.216 13:07:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:23.216 13:07:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:23.216 13:07:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:23.216 13:07:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:23.216 13:07:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:23.216 13:07:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:23.216 13:07:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:23.216 13:07:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:23.216 13:07:19 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:23.216 13:07:19 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:23.216 ************************************ 00:08:23.216 13:07:19 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:23.216 00:08:23.216 real 0m1.564s 00:08:23.216 user 0m1.294s 00:08:23.216 sys 0m0.177s 00:08:23.216 13:07:19 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:23.216 13:07:19 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:23.216 END TEST accel_decomp 00:08:23.216 ************************************ 00:08:23.216 13:07:19 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:23.216 13:07:19 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:08:23.216 13:07:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:23.216 13:07:19 accel -- common/autotest_common.sh@10 -- # set +x 
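For reference, every case in this part of the log drives the same accel_perf example binary; the sketch below shows roughly how the software decompress run recorded above could be replayed by hand. The repository path, input file and flags are copied from the command lines in the trace; the comments describing them are inferred from those lines and from the test names, so treat them as assumptions rather than documentation.

  # Illustrative sketch, not part of the captured output.
  SPDK_REPO=/home/vagrant/spdk_repo/spdk                    # path exactly as it appears in the trace
  BIN=$SPDK_REPO/build/examples/accel_perf                  # binary invoked by accel.sh above
  BIB=$SPDK_REPO/test/accel/bib                             # compressed input the tests decompress
  # Software decompress for one second with result verification, as in TEST accel_decomp:
  "$BIN" -t 1 -w decompress -l "$BIB" -y
  # In the harness an accel JSON config is additionally fed on fd 62 (-c /dev/fd/62);
  # when run by hand it can presumably be omitted so the software module is used.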
00:08:23.216 ************************************ 00:08:23.216 START TEST accel_decmop_full 00:08:23.216 ************************************ 00:08:23.216 13:07:19 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:23.216 13:07:19 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:08:23.216 13:07:19 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:08:23.216 13:07:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:23.216 13:07:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:23.216 13:07:19 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:23.216 13:07:19 accel.accel_decmop_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:23.216 13:07:19 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:08:23.216 13:07:19 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:23.216 13:07:19 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:23.216 13:07:19 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:23.216 13:07:19 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:23.216 13:07:19 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:23.216 13:07:19 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:08:23.216 13:07:19 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:08:23.216 [2024-07-15 13:07:19.719234] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:23.216 [2024-07-15 13:07:19.719574] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77159 ] 00:08:23.216 [2024-07-15 13:07:19.875300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.474 [2024-07-15 13:07:19.979455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:23.474 13:07:20 accel.accel_decmop_full -- 
accel/accel.sh@19 -- # read -r var val 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:23.474 13:07:20 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:24.845 13:07:21 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:24.845 13:07:21 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:24.845 13:07:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:24.845 13:07:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:24.845 13:07:21 accel.accel_decmop_full -- 
accel/accel.sh@20 -- # val= 00:08:24.845 13:07:21 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:24.845 13:07:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:24.845 13:07:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:24.845 13:07:21 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:24.845 13:07:21 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:24.845 13:07:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:24.845 13:07:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:24.845 13:07:21 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:24.845 13:07:21 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:24.845 13:07:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:24.845 13:07:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:24.845 13:07:21 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:24.845 13:07:21 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:24.845 13:07:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:24.845 13:07:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:24.845 13:07:21 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:24.845 13:07:21 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:24.845 13:07:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:24.845 13:07:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:24.845 13:07:21 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:24.845 13:07:21 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:24.845 13:07:21 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:24.845 00:08:24.845 real 0m1.596s 00:08:24.845 user 0m0.015s 00:08:24.845 sys 0m0.001s 00:08:24.845 13:07:21 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:24.845 ************************************ 00:08:24.845 END TEST accel_decmop_full 00:08:24.845 ************************************ 00:08:24.845 13:07:21 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:08:24.845 13:07:21 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:24.845 13:07:21 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:08:24.845 13:07:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:24.845 13:07:21 accel -- common/autotest_common.sh@10 -- # set +x 00:08:24.845 ************************************ 00:08:24.845 START TEST accel_decomp_mcore 00:08:24.845 ************************************ 00:08:24.845 13:07:21 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:24.845 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:24.845 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:24.845 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.845 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.845 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 
00:08:24.845 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:24.845 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:24.845 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:24.845 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:24.845 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:24.845 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:24.845 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:24.845 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:24.845 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:24.845 [2024-07-15 13:07:21.351972] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:24.845 [2024-07-15 13:07:21.352188] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77194 ] 00:08:24.845 [2024-07-15 13:07:21.496473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:25.102 [2024-07-15 13:07:21.601956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.102 [2024-07-15 13:07:21.602060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.102 [2024-07-15 13:07:21.602098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.102 [2024-07-15 13:07:21.602162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.102 13:07:21 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.102 13:07:21 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # read -r var val 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.102 13:07:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.473 ************************************ 00:08:26.473 END TEST accel_decomp_mcore 00:08:26.473 ************************************ 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:26.473 00:08:26.473 real 0m1.570s 00:08:26.473 user 0m0.010s 00:08:26.473 sys 0m0.003s 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:26.473 13:07:22 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:26.473 13:07:22 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:26.473 13:07:22 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:26.473 13:07:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:26.473 13:07:22 accel -- common/autotest_common.sh@10 -- # set +x 00:08:26.473 ************************************ 00:08:26.473 START TEST accel_decomp_full_mcore 00:08:26.473 ************************************ 00:08:26.473 13:07:22 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:26.473 13:07:22 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:26.473 13:07:22 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:26.473 13:07:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.473 13:07:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.473 13:07:22 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:26.473 13:07:22 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:26.473 13:07:22 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:26.473 13:07:22 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:26.473 13:07:22 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:26.473 13:07:22 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:26.473 13:07:22 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:26.473 13:07:22 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:26.473 13:07:22 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:26.473 13:07:22 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
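The remaining variants in this section only change accel_perf's flags; the combinations visible in the trace are summarized below. The interpretations in the comments are inferred from the test names and from the log itself (the data size switches from '4096 bytes' to '111250 bytes' when -o 0 is passed, and four 'Reactor started on core N' lines appear when -m 0xf is passed), so they are assumptions rather than authoritative flag documentation.

  # Flag combinations recorded in this log, annotated with inferred meanings:
  #   -o 0     "*_full" tests: accel_perf appears to take the transfer size from the
  #            input file (111250 bytes) instead of the default 4096 bytes
  #   -m 0xf   "*_mcore" tests: four-core mask, matching the four reactors started above
  #   -T 2     "*_mthread" tests: presumably two worker threads per run
  # Example: the full_mcore case that starts below
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf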
00:08:26.473 [2024-07-15 13:07:22.975901] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:26.473 [2024-07-15 13:07:22.976298] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77233 ] 00:08:26.473 [2024-07-15 13:07:23.123655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.730 [2024-07-15 13:07:23.228626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.730 [2024-07-15 13:07:23.228778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.730 [2024-07-15 13:07:23.228879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.730 [2024-07-15 13:07:23.228936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.730 13:07:23 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.730 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.731 13:07:23 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.731 13:07:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.100 13:07:24 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.100 ************************************ 00:08:28.100 END TEST accel_decomp_full_mcore 00:08:28.100 ************************************ 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:28.100 00:08:28.100 real 0m1.600s 00:08:28.100 user 0m0.015s 00:08:28.100 sys 0m0.004s 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:28.100 13:07:24 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:28.100 13:07:24 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:28.100 13:07:24 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:08:28.100 13:07:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:28.100 13:07:24 accel -- common/autotest_common.sh@10 -- # set +x 00:08:28.100 ************************************ 00:08:28.100 START TEST accel_decomp_mthread 00:08:28.100 ************************************ 00:08:28.100 13:07:24 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:28.100 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:28.100 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:28.100 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.100 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.100 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:28.100 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:28.100 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:28.100 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:28.100 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:28.100 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:28.100 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:28.100 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:28.100 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:28.100 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:28.101 [2024-07-15 13:07:24.611748] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:28.101 [2024-07-15 13:07:24.611923] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77277 ] 00:08:28.101 [2024-07-15 13:07:24.754628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.359 [2024-07-15 13:07:24.852287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.359 13:07:24 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.359 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.360 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.360 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:28.360 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.360 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.360 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.360 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:28.360 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.360 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.360 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.360 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:28.360 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.360 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.360 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.360 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.360 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.360 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.360 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.360 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.360 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.360 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.360 13:07:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.731 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:29.731 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:08:29.731 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:29.732 00:08:29.732 real 0m1.620s 00:08:29.732 user 0m1.364s 00:08:29.732 sys 0m0.162s 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:29.732 13:07:26 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:29.732 ************************************ 00:08:29.732 END TEST accel_decomp_mthread 00:08:29.732 ************************************ 00:08:29.732 13:07:26 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:29.732 13:07:26 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:29.732 13:07:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:29.732 13:07:26 accel -- common/autotest_common.sh@10 -- # set +x 00:08:29.732 ************************************ 00:08:29.732 START TEST accel_decomp_full_mthread 00:08:29.732 ************************************ 00:08:29.732 13:07:26 
accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:29.732 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:29.732 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:29.732 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.732 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.732 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:29.732 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:29.732 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:29.732 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:29.732 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:29.732 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:29.732 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:29.732 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:29.732 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:29.732 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:29.732 [2024-07-15 13:07:26.299964] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:29.732 [2024-07-15 13:07:26.300307] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77311 ] 00:08:29.732 [2024-07-15 13:07:26.453465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.989 [2024-07-15 13:07:26.610948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.989 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.990 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.990 13:07:26 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.990 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.990 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.990 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.990 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:29.990 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.990 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:29.990 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.990 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.990 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:29.990 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.990 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.990 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.990 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:29.990 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.990 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.990 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.990 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:29.990 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.990 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.990 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.990 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:29.990 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.990 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.990 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.990 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:30.247 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.247 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.247 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.247 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:30.247 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.247 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.247 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.247 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:30.247 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.247 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.247 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.247 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:30.247 13:07:26 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:08:30.247 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.247 13:07:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.618 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:31.619 ************************************ 00:08:31.619 END TEST accel_decomp_full_mthread 00:08:31.619 ************************************ 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:31.619 00:08:31.619 real 0m1.729s 00:08:31.619 user 0m1.389s 00:08:31.619 sys 0m0.240s 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:31.619 13:07:27 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:31.619 13:07:28 accel -- 
accel/accel.sh@124 -- # [[ n == y ]] 00:08:31.619 13:07:28 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:31.619 13:07:28 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:31.619 13:07:28 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:31.619 13:07:28 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:31.619 13:07:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:31.619 13:07:28 accel -- common/autotest_common.sh@10 -- # set +x 00:08:31.619 13:07:28 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:31.619 13:07:28 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:31.619 13:07:28 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:31.619 13:07:28 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:31.619 13:07:28 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:31.619 13:07:28 accel -- accel/accel.sh@41 -- # jq -r . 00:08:31.619 ************************************ 00:08:31.619 START TEST accel_dif_functional_tests 00:08:31.619 ************************************ 00:08:31.619 13:07:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:31.619 [2024-07-15 13:07:28.126917] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:31.619 [2024-07-15 13:07:28.127162] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77349 ] 00:08:31.619 [2024-07-15 13:07:28.277322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:31.877 [2024-07-15 13:07:28.384897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.877 [2024-07-15 13:07:28.384936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.877 [2024-07-15 13:07:28.384898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.877 00:08:31.877 00:08:31.877 CUnit - A unit testing framework for C - Version 2.1-3 00:08:31.877 http://cunit.sourceforge.net/ 00:08:31.877 00:08:31.877 00:08:31.877 Suite: accel_dif 00:08:31.877 Test: verify: DIF generated, GUARD check ...passed 00:08:31.877 Test: verify: DIF generated, APPTAG check ...passed 00:08:31.877 Test: verify: DIF generated, REFTAG check ...passed 00:08:31.877 Test: verify: DIF not generated, GUARD check ...passed 00:08:31.877 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 13:07:28.481978] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:31.877 passed 00:08:31.877 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 13:07:28.482184] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:31.877 passed 00:08:31.878 Test: verify: APPTAG correct, APPTAG check ...[2024-07-15 13:07:28.482349] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:31.878 passed 00:08:31.878 Test: verify: APPTAG incorrect, APPTAG check ...passed[2024-07-15 13:07:28.482513] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:31.878 00:08:31.878 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:31.878 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:31.878 Test: verify: 
REFTAG_INIT correct, REFTAG check ...passed 00:08:31.878 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:08:31.878 Test: verify copy: DIF generated, GUARD check ...[2024-07-15 13:07:28.482916] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:31.878 passed 00:08:31.878 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:31.878 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:31.878 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 13:07:28.483349] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:31.878 passed 00:08:31.878 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 13:07:28.483541] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:31.878 passed 00:08:31.878 Test: verify copy: DIF not generated, REFTAG check ...passed 00:08:31.878 Test: generate copy: DIF generated, GUARD check ...[2024-07-15 13:07:28.483656] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:31.878 passed 00:08:31.878 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:31.878 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:31.878 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:31.878 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:31.878 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:31.878 Test: generate copy: iovecs-len validate ...[2024-07-15 13:07:28.484437] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:08:31.878 passed 00:08:31.878 Test: generate copy: buffer alignment validate ...passed 00:08:31.878 00:08:31.878 Run Summary: Type Total Ran Passed Failed Inactive 00:08:31.878 suites 1 1 n/a 0 0 00:08:31.878 tests 26 26 26 0 0 00:08:31.878 asserts 115 115 115 0 n/a 00:08:31.878 00:08:31.878 Elapsed time = 0.007 seconds 00:08:32.137 ************************************ 00:08:32.137 END TEST accel_dif_functional_tests 00:08:32.137 ************************************ 00:08:32.137 00:08:32.137 real 0m0.717s 00:08:32.137 user 0m0.856s 00:08:32.137 sys 0m0.247s 00:08:32.137 13:07:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:32.137 13:07:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:32.137 00:08:32.137 real 0m36.512s 00:08:32.137 user 0m37.005s 00:08:32.137 sys 0m5.355s 00:08:32.137 13:07:28 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:32.137 ************************************ 00:08:32.137 END TEST accel 00:08:32.137 ************************************ 00:08:32.137 13:07:28 accel -- common/autotest_common.sh@10 -- # set +x 00:08:32.137 13:07:28 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:32.137 13:07:28 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:32.137 13:07:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:32.137 13:07:28 -- common/autotest_common.sh@10 -- # set +x 00:08:32.137 ************************************ 00:08:32.137 START TEST accel_rpc 00:08:32.137 ************************************ 00:08:32.137 13:07:28 accel_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:32.395 * Looking for test 
storage... 00:08:32.395 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:32.395 13:07:28 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:32.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.395 13:07:28 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=77420 00:08:32.395 13:07:28 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 77420 00:08:32.395 13:07:28 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 77420 ']' 00:08:32.395 13:07:28 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:32.395 13:07:28 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.395 13:07:28 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:32.395 13:07:28 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.395 13:07:28 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:32.395 13:07:28 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.395 [2024-07-15 13:07:29.045336] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:32.395 [2024-07-15 13:07:29.045533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77420 ] 00:08:32.652 [2024-07-15 13:07:29.202880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.652 [2024-07-15 13:07:29.320449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.636 13:07:30 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:33.636 13:07:30 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:08:33.636 13:07:30 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:33.636 13:07:30 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:33.636 13:07:30 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:33.636 13:07:30 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:33.636 13:07:30 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:33.636 13:07:30 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:33.636 13:07:30 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:33.636 13:07:30 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.636 ************************************ 00:08:33.636 START TEST accel_assign_opcode 00:08:33.636 ************************************ 00:08:33.636 13:07:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:08:33.636 13:07:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:33.636 13:07:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.636 13:07:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:33.636 [2024-07-15 13:07:30.025683] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:33.636 13:07:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.636 13:07:30 accel_rpc.accel_assign_opcode -- 
accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:33.636 13:07:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.636 13:07:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:33.636 [2024-07-15 13:07:30.033594] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:33.636 13:07:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.636 13:07:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:33.636 13:07:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.636 13:07:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:33.636 13:07:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.636 13:07:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:33.636 13:07:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:33.636 13:07:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.636 13:07:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:33.636 13:07:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:33.636 13:07:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.636 software 00:08:33.636 ************************************ 00:08:33.636 END TEST accel_assign_opcode 00:08:33.636 ************************************ 00:08:33.636 00:08:33.636 real 0m0.299s 00:08:33.636 user 0m0.048s 00:08:33.636 sys 0m0.010s 00:08:33.636 13:07:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:33.636 13:07:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:33.636 13:07:30 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 77420 00:08:33.636 13:07:30 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 77420 ']' 00:08:33.636 13:07:30 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 77420 00:08:33.636 13:07:30 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:08:33.636 13:07:30 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:33.894 13:07:30 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77420 00:08:33.894 killing process with pid 77420 00:08:33.894 13:07:30 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:33.894 13:07:30 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:33.894 13:07:30 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77420' 00:08:33.894 13:07:30 accel_rpc -- common/autotest_common.sh@965 -- # kill 77420 00:08:33.894 13:07:30 accel_rpc -- common/autotest_common.sh@970 -- # wait 77420 00:08:34.151 ************************************ 00:08:34.151 END TEST accel_rpc 00:08:34.151 ************************************ 00:08:34.151 00:08:34.151 real 0m2.008s 00:08:34.151 user 0m2.034s 00:08:34.151 sys 0m0.563s 00:08:34.151 13:07:30 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:34.151 13:07:30 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.151 13:07:30 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:34.151 13:07:30 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:34.151 13:07:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:34.151 13:07:30 -- common/autotest_common.sh@10 -- # set +x 00:08:34.408 ************************************ 00:08:34.408 START TEST app_cmdline 00:08:34.408 ************************************ 00:08:34.408 13:07:30 app_cmdline -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:34.408 * Looking for test storage... 00:08:34.408 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:34.408 13:07:30 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:34.408 13:07:30 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=77514 00:08:34.408 13:07:30 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 77514 00:08:34.409 13:07:30 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:34.409 13:07:30 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 77514 ']' 00:08:34.409 13:07:30 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.409 13:07:30 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:34.409 13:07:30 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.409 13:07:30 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:34.409 13:07:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:34.409 [2024-07-15 13:07:31.078926] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:34.409 [2024-07-15 13:07:31.079366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77514 ] 00:08:34.666 [2024-07-15 13:07:31.261252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.666 [2024-07-15 13:07:31.366161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.598 13:07:31 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:35.598 13:07:31 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:08:35.598 13:07:31 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:35.598 { 00:08:35.598 "version": "SPDK v24.05.1-pre git sha1 5fa2f5086", 00:08:35.598 "fields": { 00:08:35.598 "major": 24, 00:08:35.598 "minor": 5, 00:08:35.598 "patch": 1, 00:08:35.598 "suffix": "-pre", 00:08:35.598 "commit": "5fa2f5086" 00:08:35.598 } 00:08:35.598 } 00:08:35.598 13:07:32 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:35.598 13:07:32 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:35.598 13:07:32 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:35.598 13:07:32 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:35.598 13:07:32 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:35.598 13:07:32 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:35.598 13:07:32 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.598 13:07:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:35.598 13:07:32 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:35.598 13:07:32 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.598 13:07:32 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:35.598 13:07:32 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:35.598 13:07:32 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:35.598 13:07:32 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:35.598 13:07:32 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:35.598 13:07:32 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:35.598 13:07:32 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:35.598 13:07:32 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:35.598 13:07:32 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:35.598 13:07:32 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:35.598 13:07:32 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:35.598 13:07:32 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:35.598 13:07:32 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:35.598 13:07:32 app_cmdline -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:35.855 request: 00:08:35.855 { 00:08:35.855 "method": "env_dpdk_get_mem_stats", 00:08:35.855 "req_id": 1 00:08:35.855 } 00:08:35.855 Got JSON-RPC error response 00:08:35.855 response: 00:08:35.855 { 00:08:35.855 "code": -32601, 00:08:35.855 "message": "Method not found" 00:08:35.855 } 00:08:35.855 13:07:32 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:35.855 13:07:32 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:35.855 13:07:32 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:35.855 13:07:32 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:35.855 13:07:32 app_cmdline -- app/cmdline.sh@1 -- # killprocess 77514 00:08:35.855 13:07:32 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 77514 ']' 00:08:35.855 13:07:32 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 77514 00:08:35.855 13:07:32 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:08:35.855 13:07:32 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:35.855 13:07:32 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77514 00:08:35.855 killing process with pid 77514 00:08:35.855 13:07:32 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:35.855 13:07:32 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:35.855 13:07:32 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77514' 00:08:35.855 13:07:32 app_cmdline -- common/autotest_common.sh@965 -- # kill 77514 00:08:35.855 13:07:32 app_cmdline -- common/autotest_common.sh@970 -- # wait 77514 00:08:36.420 00:08:36.420 real 0m2.062s 00:08:36.420 user 0m2.435s 00:08:36.420 sys 0m0.526s 00:08:36.420 13:07:32 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:36.420 ************************************ 00:08:36.420 END TEST app_cmdline 00:08:36.420 ************************************ 00:08:36.420 13:07:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:36.420 13:07:32 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:36.420 13:07:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:36.420 13:07:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:36.420 13:07:32 -- common/autotest_common.sh@10 -- # set +x 00:08:36.420 ************************************ 00:08:36.420 START TEST version 00:08:36.420 ************************************ 00:08:36.420 13:07:33 version -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:36.420 * Looking for test storage... 
00:08:36.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:36.420 13:07:33 version -- app/version.sh@17 -- # get_header_version major 00:08:36.420 13:07:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:36.420 13:07:33 version -- app/version.sh@14 -- # cut -f2 00:08:36.420 13:07:33 version -- app/version.sh@14 -- # tr -d '"' 00:08:36.421 13:07:33 version -- app/version.sh@17 -- # major=24 00:08:36.421 13:07:33 version -- app/version.sh@18 -- # get_header_version minor 00:08:36.421 13:07:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:36.421 13:07:33 version -- app/version.sh@14 -- # cut -f2 00:08:36.421 13:07:33 version -- app/version.sh@14 -- # tr -d '"' 00:08:36.421 13:07:33 version -- app/version.sh@18 -- # minor=5 00:08:36.421 13:07:33 version -- app/version.sh@19 -- # get_header_version patch 00:08:36.421 13:07:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:36.421 13:07:33 version -- app/version.sh@14 -- # cut -f2 00:08:36.421 13:07:33 version -- app/version.sh@14 -- # tr -d '"' 00:08:36.421 13:07:33 version -- app/version.sh@19 -- # patch=1 00:08:36.421 13:07:33 version -- app/version.sh@20 -- # get_header_version suffix 00:08:36.421 13:07:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:36.421 13:07:33 version -- app/version.sh@14 -- # cut -f2 00:08:36.421 13:07:33 version -- app/version.sh@14 -- # tr -d '"' 00:08:36.421 13:07:33 version -- app/version.sh@20 -- # suffix=-pre 00:08:36.421 13:07:33 version -- app/version.sh@22 -- # version=24.5 00:08:36.421 13:07:33 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:36.421 13:07:33 version -- app/version.sh@25 -- # version=24.5.1 00:08:36.421 13:07:33 version -- app/version.sh@28 -- # version=24.5.1rc0 00:08:36.421 13:07:33 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:36.421 13:07:33 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:36.421 13:07:33 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:08:36.421 13:07:33 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:08:36.421 00:08:36.421 real 0m0.142s 00:08:36.421 user 0m0.084s 00:08:36.421 sys 0m0.087s 00:08:36.421 13:07:33 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:36.421 ************************************ 00:08:36.421 END TEST version 00:08:36.421 ************************************ 00:08:36.421 13:07:33 version -- common/autotest_common.sh@10 -- # set +x 00:08:36.678 13:07:33 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:36.678 13:07:33 -- spdk/autotest.sh@198 -- # uname -s 00:08:36.678 13:07:33 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:36.678 13:07:33 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:36.678 13:07:33 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:36.678 13:07:33 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:08:36.678 13:07:33 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 
00:08:36.678 13:07:33 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:36.678 13:07:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:36.678 13:07:33 -- common/autotest_common.sh@10 -- # set +x 00:08:36.678 ************************************ 00:08:36.678 START TEST blockdev_nvme 00:08:36.678 ************************************ 00:08:36.678 13:07:33 blockdev_nvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:36.678 * Looking for test storage... 00:08:36.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:36.678 13:07:33 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:36.678 13:07:33 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:08:36.678 13:07:33 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:36.678 13:07:33 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:36.678 13:07:33 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:36.678 13:07:33 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:36.678 13:07:33 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:36.678 13:07:33 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:36.678 13:07:33 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:08:36.678 13:07:33 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:08:36.678 13:07:33 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:08:36.678 13:07:33 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:08:36.678 13:07:33 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:08:36.678 13:07:33 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:08:36.678 13:07:33 blockdev_nvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:08:36.679 13:07:33 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:08:36.679 13:07:33 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:08:36.679 13:07:33 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:08:36.679 13:07:33 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:08:36.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:36.679 13:07:33 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:08:36.679 13:07:33 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:08:36.679 13:07:33 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:08:36.679 13:07:33 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:08:36.679 13:07:33 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:08:36.679 13:07:33 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=77659 00:08:36.679 13:07:33 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:36.679 13:07:33 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 77659 00:08:36.679 13:07:33 blockdev_nvme -- common/autotest_common.sh@827 -- # '[' -z 77659 ']' 00:08:36.679 13:07:33 blockdev_nvme -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.679 13:07:33 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:36.679 13:07:33 blockdev_nvme -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:36.679 13:07:33 blockdev_nvme -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.679 13:07:33 blockdev_nvme -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:36.679 13:07:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:36.679 [2024-07-15 13:07:33.406303] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:36.679 [2024-07-15 13:07:33.406482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77659 ] 00:08:36.942 [2024-07-15 13:07:33.553789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.942 [2024-07-15 13:07:33.660478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.873 13:07:34 blockdev_nvme -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:37.873 13:07:34 blockdev_nvme -- common/autotest_common.sh@860 -- # return 0 00:08:37.873 13:07:34 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:08:37.873 13:07:34 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:08:37.873 13:07:34 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:08:37.873 13:07:34 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:37.873 13:07:34 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:37.873 13:07:34 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:37.873 13:07:34 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.873 13:07:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.131 13:07:34 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:38.131 13:07:34 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:08:38.131 13:07:34 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.131 13:07:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.131 13:07:34 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.131 13:07:34 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:08:38.131 13:07:34 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:08:38.131 13:07:34 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.131 13:07:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.131 13:07:34 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.131 13:07:34 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:08:38.131 13:07:34 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.131 13:07:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.131 13:07:34 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.131 13:07:34 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:38.131 13:07:34 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.131 13:07:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.131 13:07:34 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.131 13:07:34 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:08:38.131 13:07:34 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:08:38.131 13:07:34 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.131 13:07:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.131 13:07:34 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:08:38.131 13:07:34 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.131 13:07:34 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:08:38.131 13:07:34 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:08:38.131 13:07:34 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "7230c265-929e-433b-a6ff-fb8d3d963cd9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "7230c265-929e-433b-a6ff-fb8d3d963cd9",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 
1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "4c850380-6b5d-41e5-92c7-dae3d7e8f2ce"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "4c850380-6b5d-41e5-92c7-dae3d7e8f2ce",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "9963195c-f5a2-41de-adbc-89cf364a3e55"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9963195c-f5a2-41de-adbc-89cf364a3e55",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "ccf261b1-ba79-4145-8561-48e26af59b01"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ccf261b1-ba79-4145-8561-48e26af59b01",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' 
"model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "b986671f-412d-428f-b6cf-ade7e50d6f5a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b986671f-412d-428f-b6cf-ade7e50d6f5a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "bd34dfa7-df68-4b33-8d23-98a2a12875c7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "bd34dfa7-df68-4b33-8d23-98a2a12875c7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:38.131 13:07:34 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:08:38.131 13:07:34 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:08:38.131 13:07:34 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:08:38.131 13:07:34 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 77659 00:08:38.131 13:07:34 blockdev_nvme -- common/autotest_common.sh@946 -- # '[' -z 77659 ']' 00:08:38.131 13:07:34 blockdev_nvme -- 
common/autotest_common.sh@950 -- # kill -0 77659 00:08:38.132 13:07:34 blockdev_nvme -- common/autotest_common.sh@951 -- # uname 00:08:38.132 13:07:34 blockdev_nvme -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:38.132 13:07:34 blockdev_nvme -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77659 00:08:38.388 killing process with pid 77659 00:08:38.388 13:07:34 blockdev_nvme -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:38.388 13:07:34 blockdev_nvme -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:38.388 13:07:34 blockdev_nvme -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77659' 00:08:38.388 13:07:34 blockdev_nvme -- common/autotest_common.sh@965 -- # kill 77659 00:08:38.388 13:07:34 blockdev_nvme -- common/autotest_common.sh@970 -- # wait 77659 00:08:38.645 13:07:35 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:38.645 13:07:35 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:38.645 13:07:35 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:38.645 13:07:35 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:38.645 13:07:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.645 ************************************ 00:08:38.645 START TEST bdev_hello_world 00:08:38.645 ************************************ 00:08:38.645 13:07:35 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:38.902 [2024-07-15 13:07:35.436565] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:38.902 [2024-07-15 13:07:35.437049] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77732 ] 00:08:38.902 [2024-07-15 13:07:35.585713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.160 [2024-07-15 13:07:35.684763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.417 [2024-07-15 13:07:36.089406] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:39.417 [2024-07-15 13:07:36.089477] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:39.417 [2024-07-15 13:07:36.089522] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:39.417 [2024-07-15 13:07:36.092138] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:39.417 [2024-07-15 13:07:36.093256] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:39.417 [2024-07-15 13:07:36.093306] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:39.417 [2024-07-15 13:07:36.093522] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:08:39.417 00:08:39.417 [2024-07-15 13:07:36.093567] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:39.673 ************************************ 00:08:39.673 END TEST bdev_hello_world 00:08:39.673 ************************************ 00:08:39.673 00:08:39.673 real 0m1.011s 00:08:39.673 user 0m0.672s 00:08:39.673 sys 0m0.232s 00:08:39.673 13:07:36 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:39.673 13:07:36 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:39.673 13:07:36 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:08:39.674 13:07:36 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:39.674 13:07:36 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:39.674 13:07:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:39.674 ************************************ 00:08:39.674 START TEST bdev_bounds 00:08:39.674 ************************************ 00:08:39.674 13:07:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:08:39.674 13:07:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=77763 00:08:39.674 13:07:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:39.674 13:07:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:39.931 Process bdevio pid: 77763 00:08:39.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.931 13:07:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 77763' 00:08:39.931 13:07:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 77763 00:08:39.931 13:07:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 77763 ']' 00:08:39.931 13:07:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.931 13:07:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:39.931 13:07:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.931 13:07:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:39.931 13:07:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:39.931 [2024-07-15 13:07:36.505846] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:39.931 [2024-07-15 13:07:36.506073] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77763 ] 00:08:39.931 [2024-07-15 13:07:36.652878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:40.188 [2024-07-15 13:07:36.751826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.188 [2024-07-15 13:07:36.751887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.188 [2024-07-15 13:07:36.751958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.753 13:07:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:40.753 13:07:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:08:40.753 13:07:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:41.012 I/O targets: 00:08:41.012 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:41.012 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:41.012 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:41.012 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:41.012 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:41.012 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:41.012 00:08:41.012 00:08:41.012 CUnit - A unit testing framework for C - Version 2.1-3 00:08:41.012 http://cunit.sourceforge.net/ 00:08:41.012 00:08:41.012 00:08:41.012 Suite: bdevio tests on: Nvme3n1 00:08:41.012 Test: blockdev write read block ...passed 00:08:41.012 Test: blockdev write zeroes read block ...passed 00:08:41.012 Test: blockdev write zeroes read no split ...passed 00:08:41.012 Test: blockdev write zeroes read split ...passed 00:08:41.012 Test: blockdev write zeroes read split partial ...passed 00:08:41.012 Test: blockdev reset ...[2024-07-15 13:07:37.615843] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:08:41.012 passed 00:08:41.012 Test: blockdev write read 8 blocks ...[2024-07-15 13:07:37.618646] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
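The bdevio process launched above with '-w -s 0 --json ...' does not run any tests on its own; it brings up the bdev layer and then waits on the RPC socket until tests.py triggers the suites whose results follow. A minimal sketch of the same two-step flow by hand (flag semantics are inferred from this trace, not re-checked against the sources):

# Step 1: start bdevio against the same bdev config and let it wait for an RPC trigger.
sudo /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
# Step 2: once /var/tmp/spdk.sock is listening, kick off the registered suites
# (this is the call that produces the per-namespace output below).
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests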
00:08:41.012 passed 00:08:41.012 Test: blockdev write read size > 128k ...passed 00:08:41.012 Test: blockdev write read invalid size ...passed 00:08:41.012 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:41.012 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:41.012 Test: blockdev write read max offset ...passed 00:08:41.012 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:41.012 Test: blockdev writev readv 8 blocks ...passed 00:08:41.012 Test: blockdev writev readv 30 x 1block ...passed 00:08:41.012 Test: blockdev writev readv block ...passed 00:08:41.012 Test: blockdev writev readv size > 128k ...passed 00:08:41.012 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:41.012 Test: blockdev comparev and writev ...[2024-07-15 13:07:37.626613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2aea0e000 len:0x1000 00:08:41.012 [2024-07-15 13:07:37.626733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:41.012 passed 00:08:41.012 Test: blockdev nvme passthru rw ...passed 00:08:41.012 Test: blockdev nvme passthru vendor specific ...passed 00:08:41.012 Test: blockdev nvme admin passthru ...[2024-07-15 13:07:37.627860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:41.012 [2024-07-15 13:07:37.627926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:41.012 passed 00:08:41.012 Test: blockdev copy ...passed 00:08:41.012 Suite: bdevio tests on: Nvme2n3 00:08:41.012 Test: blockdev write read block ...passed 00:08:41.012 Test: blockdev write zeroes read block ...passed 00:08:41.012 Test: blockdev write zeroes read no split ...passed 00:08:41.012 Test: blockdev write zeroes read split ...passed 00:08:41.012 Test: blockdev write zeroes read split partial ...passed 00:08:41.012 Test: blockdev reset ...[2024-07-15 13:07:37.656502] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:41.012 [2024-07-15 13:07:37.659981] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
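The comparev tests in these suites exercise the NVMe COMPARE path, which the bdev dump earlier in this log advertises via "compare": true under supported_io_types. A quick way to confirm those flags for a single bdev against a running target is sketched below; the -b name filter and the jq expression are illustrative assumptions, and the RPC produces JSON of the same shape as the dump above:

# Query one bdev and show only its advertised I/O types.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme2n3 \
    | jq '.[0].supported_io_types'
# Expect flags matching the dump earlier in this log (e.g. "compare": true).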
00:08:41.012 passed 00:08:41.012 Test: blockdev write read 8 blocks ...passed 00:08:41.012 Test: blockdev write read size > 128k ...passed 00:08:41.012 Test: blockdev write read invalid size ...passed 00:08:41.012 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:41.012 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:41.012 Test: blockdev write read max offset ...passed 00:08:41.012 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:41.012 Test: blockdev writev readv 8 blocks ...passed 00:08:41.012 Test: blockdev writev readv 30 x 1block ...passed 00:08:41.012 Test: blockdev writev readv block ...passed 00:08:41.012 Test: blockdev writev readv size > 128k ...passed 00:08:41.012 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:41.013 Test: blockdev comparev and writev ...[2024-07-15 13:07:37.668413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2aea0a000 len:0x1000 00:08:41.013 [2024-07-15 13:07:37.668514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:41.013 passed 00:08:41.013 Test: blockdev nvme passthru rw ...passed 00:08:41.013 Test: blockdev nvme passthru vendor specific ...passed 00:08:41.013 Test: blockdev nvme admin passthru ...[2024-07-15 13:07:37.669439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:41.013 [2024-07-15 13:07:37.669496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:41.013 passed 00:08:41.013 Test: blockdev copy ...passed 00:08:41.013 Suite: bdevio tests on: Nvme2n2 00:08:41.013 Test: blockdev write read block ...passed 00:08:41.013 Test: blockdev write zeroes read block ...passed 00:08:41.013 Test: blockdev write zeroes read no split ...passed 00:08:41.013 Test: blockdev write zeroes read split ...passed 00:08:41.013 Test: blockdev write zeroes read split partial ...passed 00:08:41.013 Test: blockdev reset ...[2024-07-15 13:07:37.695051] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:41.013 [2024-07-15 13:07:37.698214] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:41.013 passed 00:08:41.013 Test: blockdev write read 8 blocks ...passed 00:08:41.013 Test: blockdev write read size > 128k ...passed 00:08:41.013 Test: blockdev write read invalid size ...passed 00:08:41.013 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:41.013 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:41.013 Test: blockdev write read max offset ...passed 00:08:41.013 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:41.013 Test: blockdev writev readv 8 blocks ...passed 00:08:41.013 Test: blockdev writev readv 30 x 1block ...passed 00:08:41.013 Test: blockdev writev readv block ...passed 00:08:41.013 Test: blockdev writev readv size > 128k ...passed 00:08:41.013 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:41.013 Test: blockdev comparev and writev ...[2024-07-15 13:07:37.707228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2aea06000 len:0x1000 00:08:41.013 [2024-07-15 13:07:37.707333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:41.013 passed 00:08:41.013 Test: blockdev nvme passthru rw ...passed 00:08:41.013 Test: blockdev nvme passthru vendor specific ...[2024-07-15 13:07:37.708223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:41.013 passed 00:08:41.013 Test: blockdev nvme admin passthru ...[2024-07-15 13:07:37.708273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:41.013 passed 00:08:41.013 Test: blockdev copy ...passed 00:08:41.013 Suite: bdevio tests on: Nvme2n1 00:08:41.013 Test: blockdev write read block ...passed 00:08:41.013 Test: blockdev write zeroes read block ...passed 00:08:41.013 Test: blockdev write zeroes read no split ...passed 00:08:41.013 Test: blockdev write zeroes read split ...passed 00:08:41.013 Test: blockdev write zeroes read split partial ...passed 00:08:41.013 Test: blockdev reset ...[2024-07-15 13:07:37.735055] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:41.013 [2024-07-15 13:07:37.738005] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:41.013 passed 00:08:41.013 Test: blockdev write read 8 blocks ...passed 00:08:41.013 Test: blockdev write read size > 128k ...passed 00:08:41.013 Test: blockdev write read invalid size ...passed 00:08:41.013 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:41.013 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:41.013 Test: blockdev write read max offset ...passed 00:08:41.013 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:41.013 Test: blockdev writev readv 8 blocks ...passed 00:08:41.013 Test: blockdev writev readv 30 x 1block ...passed 00:08:41.013 Test: blockdev writev readv block ...passed 00:08:41.013 Test: blockdev writev readv size > 128k ...passed 00:08:41.013 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:41.013 Test: blockdev comparev and writev ...[2024-07-15 13:07:37.746638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2aea02000 len:0x1000 00:08:41.013 [2024-07-15 13:07:37.746733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:41.013 passed 00:08:41.013 Test: blockdev nvme passthru rw ...passed 00:08:41.013 Test: blockdev nvme passthru vendor specific ...passed 00:08:41.013 Test: blockdev nvme admin passthru ...[2024-07-15 13:07:37.747653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:41.013 [2024-07-15 13:07:37.747712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:41.271 passed 00:08:41.271 Test: blockdev copy ...passed 00:08:41.271 Suite: bdevio tests on: Nvme1n1 00:08:41.271 Test: blockdev write read block ...passed 00:08:41.271 Test: blockdev write zeroes read block ...passed 00:08:41.271 Test: blockdev write zeroes read no split ...passed 00:08:41.271 Test: blockdev write zeroes read split ...passed 00:08:41.271 Test: blockdev write zeroes read split partial ...passed 00:08:41.271 Test: blockdev reset ...[2024-07-15 13:07:37.775153] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:41.271 [2024-07-15 13:07:37.777993] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:41.271 passed 00:08:41.271 Test: blockdev write read 8 blocks ...passed 00:08:41.271 Test: blockdev write read size > 128k ...passed 00:08:41.271 Test: blockdev write read invalid size ...passed 00:08:41.271 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:41.271 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:41.271 Test: blockdev write read max offset ...passed 00:08:41.271 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:41.271 Test: blockdev writev readv 8 blocks ...passed 00:08:41.271 Test: blockdev writev readv 30 x 1block ...passed 00:08:41.271 Test: blockdev writev readv block ...passed 00:08:41.271 Test: blockdev writev readv size > 128k ...passed 00:08:41.271 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:41.271 Test: blockdev comparev and writev ...[2024-07-15 13:07:37.787881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:08:41.271 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2ba80e000 len:0x1000 00:08:41.271 [2024-07-15 13:07:37.788118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:41.271 passed 00:08:41.271 Test: blockdev nvme passthru vendor specific ...[2024-07-15 13:07:37.789140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:41.271 [2024-07-15 13:07:37.789196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:41.271 passed 00:08:41.271 Test: blockdev nvme admin passthru ...passed 00:08:41.271 Test: blockdev copy ...passed 00:08:41.271 Suite: bdevio tests on: Nvme0n1 00:08:41.271 Test: blockdev write read block ...passed 00:08:41.271 Test: blockdev write zeroes read block ...passed 00:08:41.271 Test: blockdev write zeroes read no split ...passed 00:08:41.271 Test: blockdev write zeroes read split ...passed 00:08:41.271 Test: blockdev write zeroes read split partial ...passed 00:08:41.271 Test: blockdev reset ...[2024-07-15 13:07:37.814534] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:41.271 [2024-07-15 13:07:37.817279] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:41.271 passed 00:08:41.271 Test: blockdev write read 8 blocks ...passed 00:08:41.271 Test: blockdev write read size > 128k ...passed 00:08:41.271 Test: blockdev write read invalid size ...passed 00:08:41.271 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:41.271 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:41.271 Test: blockdev write read max offset ...passed 00:08:41.271 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:41.271 Test: blockdev writev readv 8 blocks ...passed 00:08:41.271 Test: blockdev writev readv 30 x 1block ...passed 00:08:41.271 Test: blockdev writev readv block ...passed 00:08:41.271 Test: blockdev writev readv size > 128k ...passed 00:08:41.271 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:41.271 Test: blockdev comparev and writev ...passed 00:08:41.271 Test: blockdev nvme passthru rw ...[2024-07-15 13:07:37.823001] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:41.271 separate metadata which is not supported yet. 00:08:41.271 passed 00:08:41.271 Test: blockdev nvme passthru vendor specific ...passed 00:08:41.271 Test: blockdev nvme admin passthru ...[2024-07-15 13:07:37.823649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:41.271 [2024-07-15 13:07:37.823708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:41.271 passed 00:08:41.271 Test: blockdev copy ...passed 00:08:41.271 00:08:41.271 Run Summary: Type Total Ran Passed Failed Inactive 00:08:41.271 suites 6 6 n/a 0 0 00:08:41.271 tests 138 138 138 0 0 00:08:41.271 asserts 893 893 893 0 n/a 00:08:41.271 00:08:41.271 Elapsed time = 0.524 seconds 00:08:41.271 0 00:08:41.271 13:07:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 77763 00:08:41.271 13:07:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 77763 ']' 00:08:41.271 13:07:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 77763 00:08:41.271 13:07:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:08:41.271 13:07:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:41.271 13:07:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77763 00:08:41.271 13:07:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:41.271 13:07:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:41.271 13:07:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77763' 00:08:41.271 killing process with pid 77763 00:08:41.271 13:07:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@965 -- # kill 77763 00:08:41.271 13:07:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # wait 77763 00:08:41.530 13:07:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:08:41.530 00:08:41.530 real 0m1.690s 00:08:41.530 user 0m4.165s 00:08:41.530 sys 0m0.397s 00:08:41.530 ************************************ 00:08:41.530 END TEST bdev_bounds 00:08:41.530 13:07:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:41.530 13:07:38 blockdev_nvme.bdev_bounds -- 
common/autotest_common.sh@10 -- # set +x 00:08:41.530 ************************************ 00:08:41.530 13:07:38 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:41.530 13:07:38 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:08:41.530 13:07:38 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:41.530 13:07:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:41.530 ************************************ 00:08:41.530 START TEST bdev_nbd 00:08:41.530 ************************************ 00:08:41.530 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:41.530 13:07:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:08:41.530 13:07:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:08:41.530 13:07:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.530 13:07:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:41.530 13:07:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:41.530 13:07:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:08:41.530 13:07:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:08:41.530 13:07:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:08:41.530 13:07:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:41.530 13:07:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:08:41.530 13:07:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:08:41.530 13:07:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:41.530 13:07:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:08:41.530 13:07:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:41.530 13:07:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:08:41.530 13:07:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=77817 00:08:41.530 13:07:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:41.530 13:07:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 77817 /var/tmp/spdk-nbd.sock 00:08:41.530 13:07:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:41.530 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 77817 ']' 00:08:41.530 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:41.530 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:08:41.530 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:41.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:41.530 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:41.530 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:41.787 [2024-07-15 13:07:38.269906] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:41.787 [2024-07-15 13:07:38.270189] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.787 [2024-07-15 13:07:38.417035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.787 [2024-07-15 13:07:38.512537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.719 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:42.719 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:08:42.719 13:07:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:42.720 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:42.720 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:42.720 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:42.720 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:42.720 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:42.720 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:42.720 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:42.720 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:42.720 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:42.720 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:42.720 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:42.720 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:42.977 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:42.977 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:42.977 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:42.977 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:08:42.977 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:42.977 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:42.977 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:42.977 13:07:39 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:08:42.977 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:42.977 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:42.977 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:42.977 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:42.977 1+0 records in 00:08:42.977 1+0 records out 00:08:42.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000564309 s, 7.3 MB/s 00:08:42.977 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.977 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:42.977 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.977 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:42.977 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:42.977 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:42.977 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:42.977 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:43.234 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:43.234 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:43.234 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:43.234 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:08:43.234 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:43.234 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:43.234 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:43.234 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:08:43.234 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:43.234 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:43.234 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:43.234 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:43.234 1+0 records in 00:08:43.234 1+0 records out 00:08:43.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511192 s, 8.0 MB/s 00:08:43.234 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.234 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:43.234 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.234 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:43.234 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:43.234 13:07:39 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:43.234 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:43.234 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:43.494 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:43.494 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:43.494 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:43.494 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:08:43.494 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:43.494 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:43.494 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:43.494 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:08:43.494 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:43.494 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:43.494 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:43.494 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:43.494 1+0 records in 00:08:43.494 1+0 records out 00:08:43.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000649951 s, 6.3 MB/s 00:08:43.494 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.494 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:43.494 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.494 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:43.494 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:43.494 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:43.494 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:43.494 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:43.752 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:43.752 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:43.752 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:43.752 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:08:43.752 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:43.752 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:43.752 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:43.752 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:08:43.752 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:43.752 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( 
i = 1 )) 00:08:43.752 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:43.752 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:43.752 1+0 records in 00:08:43.752 1+0 records out 00:08:43.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000653145 s, 6.3 MB/s 00:08:43.752 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.752 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:43.752 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.752 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:43.752 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:43.752 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:43.752 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:43.752 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:44.010 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:44.010 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:44.010 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:44.010 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:08:44.010 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:44.010 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:44.010 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:44.010 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:08:44.010 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:44.010 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:44.010 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:44.010 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:44.010 1+0 records in 00:08:44.010 1+0 records out 00:08:44.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000830465 s, 4.9 MB/s 00:08:44.010 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.010 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:44.010 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.010 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:44.010 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:44.010 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:44.010 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:44.010 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:44.268 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:44.268 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:44.268 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:44.268 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:08:44.268 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:44.268 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:44.268 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:44.268 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:08:44.268 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:44.268 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:44.268 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:44.268 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:44.268 1+0 records in 00:08:44.268 1+0 records out 00:08:44.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000713678 s, 5.7 MB/s 00:08:44.268 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.268 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:44.268 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.268 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:44.268 13:07:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:44.269 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:44.269 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:44.269 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:44.527 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:44.527 { 00:08:44.527 "nbd_device": "/dev/nbd0", 00:08:44.527 "bdev_name": "Nvme0n1" 00:08:44.527 }, 00:08:44.527 { 00:08:44.527 "nbd_device": "/dev/nbd1", 00:08:44.527 "bdev_name": "Nvme1n1" 00:08:44.527 }, 00:08:44.527 { 00:08:44.527 "nbd_device": "/dev/nbd2", 00:08:44.527 "bdev_name": "Nvme2n1" 00:08:44.527 }, 00:08:44.527 { 00:08:44.527 "nbd_device": "/dev/nbd3", 00:08:44.527 "bdev_name": "Nvme2n2" 00:08:44.527 }, 00:08:44.527 { 00:08:44.527 "nbd_device": "/dev/nbd4", 00:08:44.527 "bdev_name": "Nvme2n3" 00:08:44.527 }, 00:08:44.527 { 00:08:44.527 "nbd_device": "/dev/nbd5", 00:08:44.527 "bdev_name": "Nvme3n1" 00:08:44.527 } 00:08:44.527 ]' 00:08:44.527 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:44.527 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:44.527 { 00:08:44.527 "nbd_device": "/dev/nbd0", 00:08:44.527 "bdev_name": "Nvme0n1" 00:08:44.527 }, 00:08:44.527 { 00:08:44.527 "nbd_device": "/dev/nbd1", 00:08:44.527 "bdev_name": "Nvme1n1" 00:08:44.527 }, 00:08:44.527 { 00:08:44.527 
"nbd_device": "/dev/nbd2", 00:08:44.527 "bdev_name": "Nvme2n1" 00:08:44.527 }, 00:08:44.527 { 00:08:44.527 "nbd_device": "/dev/nbd3", 00:08:44.527 "bdev_name": "Nvme2n2" 00:08:44.527 }, 00:08:44.527 { 00:08:44.527 "nbd_device": "/dev/nbd4", 00:08:44.527 "bdev_name": "Nvme2n3" 00:08:44.527 }, 00:08:44.527 { 00:08:44.527 "nbd_device": "/dev/nbd5", 00:08:44.527 "bdev_name": "Nvme3n1" 00:08:44.527 } 00:08:44.527 ]' 00:08:44.527 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:44.784 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:44.784 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.784 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:44.784 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:44.784 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:44.784 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:44.784 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:44.784 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:44.784 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:44.784 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:44.784 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:44.784 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:44.784 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:45.043 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:45.043 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.043 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.043 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:45.043 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:45.043 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:45.043 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:45.043 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.043 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.043 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:45.043 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:45.043 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.043 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.043 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:45.300 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:45.300 13:07:42 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:45.300 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:45.300 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.300 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.300 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:45.300 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:45.300 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.300 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.300 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:45.864 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:45.864 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:45.864 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:45.864 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.864 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.864 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:45.864 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:45.864 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.864 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.864 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:46.120 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:46.120 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:46.120 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:46.120 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:46.120 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:46.121 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:46.121 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:46.121 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:46.121 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:46.121 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:46.377 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:46.378 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:46.378 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:46.378 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:46.378 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:46.378 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:46.378 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 
00:08:46.378 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:46.378 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:46.378 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:46.378 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:46.650 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:46.651 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:46.651 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:46.651 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:46.651 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:46.651 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:46.651 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:46.651 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:46.651 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:46.651 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:46.651 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:46.651 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:46.651 13:07:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:46.651 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:46.651 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:46.651 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:46.651 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:46.651 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:46.651 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:46.651 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:46.651 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:46.651 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:46.651 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:46.651 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:46.651 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:46.651 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:46.651 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:46.651 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:46.908 /dev/nbd0 00:08:46.908 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:46.908 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:46.908 13:07:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:08:46.908 13:07:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:46.908 13:07:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:46.908 13:07:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:46.908 13:07:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:08:46.908 13:07:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:46.909 13:07:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:46.909 13:07:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:46.909 13:07:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:46.909 1+0 records in 00:08:46.909 1+0 records out 00:08:46.909 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000712843 s, 5.7 MB/s 00:08:46.909 13:07:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.909 13:07:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:46.909 13:07:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.909 13:07:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:46.909 13:07:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:46.909 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:46.909 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:46.909 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:47.166 /dev/nbd1 00:08:47.166 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:47.166 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:47.166 13:07:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:08:47.166 13:07:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:47.166 13:07:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:47.166 13:07:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:47.166 13:07:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:08:47.166 13:07:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:47.166 13:07:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:47.166 13:07:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:47.166 13:07:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:47.166 1+0 records in 00:08:47.166 1+0 records out 
00:08:47.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00080909 s, 5.1 MB/s 00:08:47.166 13:07:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.166 13:07:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:47.166 13:07:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.166 13:07:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:47.166 13:07:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:47.166 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.166 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:47.166 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:47.423 /dev/nbd10 00:08:47.424 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:47.424 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:47.424 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:08:47.424 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:47.424 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:47.424 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:47.424 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:08:47.424 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:47.424 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:47.424 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:47.424 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:47.424 1+0 records in 00:08:47.424 1+0 records out 00:08:47.424 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000716278 s, 5.7 MB/s 00:08:47.424 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.424 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:47.424 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.424 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:47.424 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:47.424 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.424 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:47.424 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:47.988 /dev/nbd11 00:08:47.988 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:47.988 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:47.988 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:08:47.988 13:07:44 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:47.988 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:47.988 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:47.988 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:08:47.988 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:47.988 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:47.988 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:47.988 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:47.988 1+0 records in 00:08:47.988 1+0 records out 00:08:47.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106906 s, 3.8 MB/s 00:08:47.988 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.988 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:47.988 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.988 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:47.988 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:47.988 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.988 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:47.988 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:47.988 /dev/nbd12 00:08:48.245 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:48.245 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:48.245 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:08:48.245 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:48.245 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:48.245 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:48.245 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:08:48.245 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:48.245 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:48.245 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:48.245 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:48.245 1+0 records in 00:08:48.245 1+0 records out 00:08:48.245 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000674589 s, 6.1 MB/s 00:08:48.245 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.245 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:48.245 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.245 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:48.245 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:48.245 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:48.245 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:48.245 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:48.503 /dev/nbd13 00:08:48.503 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:48.503 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:48.503 13:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:08:48.503 13:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:48.503 13:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:48.503 13:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:48.503 13:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:08:48.503 13:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:48.503 13:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:48.503 13:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:48.503 13:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:48.503 1+0 records in 00:08:48.503 1+0 records out 00:08:48.503 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00058309 s, 7.0 MB/s 00:08:48.503 13:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.503 13:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:48.503 13:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.503 13:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:48.503 13:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:48.503 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:48.503 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:48.503 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:48.503 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.503 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:48.760 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:48.760 { 00:08:48.760 "nbd_device": "/dev/nbd0", 00:08:48.760 "bdev_name": "Nvme0n1" 00:08:48.760 }, 00:08:48.760 { 00:08:48.760 "nbd_device": "/dev/nbd1", 00:08:48.760 "bdev_name": "Nvme1n1" 00:08:48.760 }, 00:08:48.760 { 00:08:48.760 "nbd_device": "/dev/nbd10", 00:08:48.760 "bdev_name": "Nvme2n1" 00:08:48.760 }, 00:08:48.760 { 00:08:48.760 "nbd_device": "/dev/nbd11", 00:08:48.760 "bdev_name": "Nvme2n2" 00:08:48.760 }, 
00:08:48.760 { 00:08:48.761 "nbd_device": "/dev/nbd12", 00:08:48.761 "bdev_name": "Nvme2n3" 00:08:48.761 }, 00:08:48.761 { 00:08:48.761 "nbd_device": "/dev/nbd13", 00:08:48.761 "bdev_name": "Nvme3n1" 00:08:48.761 } 00:08:48.761 ]' 00:08:48.761 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:48.761 { 00:08:48.761 "nbd_device": "/dev/nbd0", 00:08:48.761 "bdev_name": "Nvme0n1" 00:08:48.761 }, 00:08:48.761 { 00:08:48.761 "nbd_device": "/dev/nbd1", 00:08:48.761 "bdev_name": "Nvme1n1" 00:08:48.761 }, 00:08:48.761 { 00:08:48.761 "nbd_device": "/dev/nbd10", 00:08:48.761 "bdev_name": "Nvme2n1" 00:08:48.761 }, 00:08:48.761 { 00:08:48.761 "nbd_device": "/dev/nbd11", 00:08:48.761 "bdev_name": "Nvme2n2" 00:08:48.761 }, 00:08:48.761 { 00:08:48.761 "nbd_device": "/dev/nbd12", 00:08:48.761 "bdev_name": "Nvme2n3" 00:08:48.761 }, 00:08:48.761 { 00:08:48.761 "nbd_device": "/dev/nbd13", 00:08:48.761 "bdev_name": "Nvme3n1" 00:08:48.761 } 00:08:48.761 ]' 00:08:48.761 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:48.761 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:48.761 /dev/nbd1 00:08:48.761 /dev/nbd10 00:08:48.761 /dev/nbd11 00:08:48.761 /dev/nbd12 00:08:48.761 /dev/nbd13' 00:08:48.761 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:48.761 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:48.761 /dev/nbd1 00:08:48.761 /dev/nbd10 00:08:48.761 /dev/nbd11 00:08:48.761 /dev/nbd12 00:08:48.761 /dev/nbd13' 00:08:48.761 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:08:48.761 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:08:48.761 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:08:48.761 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:48.761 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:48.761 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:48.761 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:48.761 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:48.761 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:48.761 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:48.761 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:48.761 256+0 records in 00:08:48.761 256+0 records out 00:08:48.761 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00906869 s, 116 MB/s 00:08:48.761 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.761 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:49.018 256+0 records in 00:08:49.018 256+0 records out 00:08:49.018 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140592 s, 7.5 MB/s 00:08:49.018 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:49.019 13:07:45 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:49.019 256+0 records in 00:08:49.019 256+0 records out 00:08:49.019 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144603 s, 7.3 MB/s 00:08:49.019 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:49.019 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:49.276 256+0 records in 00:08:49.276 256+0 records out 00:08:49.276 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134782 s, 7.8 MB/s 00:08:49.276 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:49.277 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:49.277 256+0 records in 00:08:49.277 256+0 records out 00:08:49.277 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138627 s, 7.6 MB/s 00:08:49.277 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:49.277 13:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:49.535 256+0 records in 00:08:49.535 256+0 records out 00:08:49.535 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136912 s, 7.7 MB/s 00:08:49.535 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:49.535 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:49.535 256+0 records in 00:08:49.535 256+0 records out 00:08:49.535 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132117 s, 7.9 MB/s 00:08:49.535 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:49.535 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:49.535 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:49.535 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:49.535 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:49.535 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:49.535 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:49.535 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:49.535 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:49.535 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:49.535 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:49.535 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:49.535 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:49.793 13:07:46 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:49.793 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:49.793 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:49.793 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:49.793 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:49.793 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:49.793 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:49.793 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:49.793 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.793 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:49.793 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:49.793 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:49.793 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.793 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:50.050 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:50.050 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:50.050 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:50.050 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.050 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.050 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:50.050 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:50.050 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.050 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.050 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:50.308 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:50.308 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:50.308 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:50.308 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.308 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.308 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:50.308 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:50.308 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.308 13:07:46 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.308 13:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:50.565 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:50.565 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:50.565 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:50.565 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.565 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.565 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:50.565 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:50.565 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.565 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.565 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:50.823 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:50.823 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:50.823 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:50.823 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.823 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.823 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:50.823 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:50.823 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.823 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.823 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:51.389 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:51.389 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:51.389 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:51.389 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.389 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.389 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:51.389 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:51.389 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.389 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.389 13:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:51.389 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:51.389 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:51.389 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:51.389 
13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.389 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.389 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:51.389 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:51.389 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.389 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:51.389 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.389 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:51.953 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:51.953 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:51.953 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:51.953 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:51.953 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:51.953 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:51.953 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:51.953 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:51.953 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:51.953 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:51.953 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:51.953 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:51.953 13:07:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:51.953 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.953 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:51.953 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:51.953 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:51.953 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:52.236 malloc_lvol_verify 00:08:52.236 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:52.236 34adfcd2-89f7-4a4f-9b90-f7eec1d06d94 00:08:52.494 13:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:52.751 82a32241-9fee-438e-be09-7a00d58b6332 00:08:52.751 13:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:53.009 /dev/nbd0 00:08:53.009 13:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:53.009 mke2fs 1.46.5 
(30-Dec-2021) 00:08:53.009 Discarding device blocks: 0/4096 done 00:08:53.009 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:53.009 00:08:53.009 Allocating group tables: 0/1 done 00:08:53.009 Writing inode tables: 0/1 done 00:08:53.009 Creating journal (1024 blocks): done 00:08:53.009 Writing superblocks and filesystem accounting information: 0/1 done 00:08:53.009 00:08:53.009 13:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:53.009 13:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:53.009 13:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:53.009 13:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:53.009 13:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:53.009 13:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:53.009 13:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:53.009 13:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:53.267 13:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:53.267 13:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:53.267 13:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:53.267 13:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:53.267 13:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:53.267 13:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:53.267 13:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:53.267 13:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:53.267 13:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:53.267 13:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:53.268 13:07:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 77817 00:08:53.268 13:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 77817 ']' 00:08:53.268 13:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 77817 00:08:53.268 13:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:08:53.268 13:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:53.268 13:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77817 00:08:53.268 killing process with pid 77817 00:08:53.268 13:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:53.268 13:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:53.268 13:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77817' 00:08:53.268 13:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@965 -- # kill 77817 00:08:53.268 13:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # wait 77817 00:08:53.525 13:07:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:08:53.525 00:08:53.525 real 0m12.049s 00:08:53.525 user 0m17.530s 00:08:53.525 sys 0m4.220s 00:08:53.525 13:07:50 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:53.525 13:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:53.525 ************************************ 00:08:53.525 END TEST bdev_nbd 00:08:53.525 ************************************ 00:08:53.525 13:07:50 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:08:53.525 13:07:50 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:08:53.525 skipping fio tests on NVMe due to multi-ns failures. 00:08:53.525 13:07:50 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:53.525 13:07:50 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:53.525 13:07:50 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:53.525 13:07:50 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:08:53.525 13:07:50 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:53.525 13:07:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:53.525 ************************************ 00:08:53.525 START TEST bdev_verify 00:08:53.525 ************************************ 00:08:53.525 13:07:50 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:53.796 [2024-07-15 13:07:50.365577] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:53.796 [2024-07-15 13:07:50.365815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78213 ] 00:08:53.796 [2024-07-15 13:07:50.518504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:54.053 [2024-07-15 13:07:50.627264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.053 [2024-07-15 13:07:50.627264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.618 Running I/O for 5 seconds... 
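For reference, the five-second pass that has just started is plain bdevperf driving every NVMe bdev described in bdev.json with a verify workload. A minimal sketch of the same invocation, with the paths exactly as this run used them (the -C flag is carried over from the harness unchanged), would be:

    # -q 128: queue depth, -o 4096: I/O size in bytes, -w verify: verified read/write workload,
    # -t 5: run time in seconds, -m 0x3: reactors on cores 0 and 1
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3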
00:08:59.881 00:08:59.881 Latency(us) 00:08:59.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.881 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:59.881 Verification LBA range: start 0x0 length 0xbd0bd 00:08:59.881 Nvme0n1 : 5.07 1577.54 6.16 0.00 0.00 80738.46 11558.17 111053.73 00:08:59.881 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:59.881 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:59.881 Nvme0n1 : 5.07 1602.07 6.26 0.00 0.00 79500.59 10307.03 89605.59 00:08:59.881 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:59.881 Verification LBA range: start 0x0 length 0xa0000 00:08:59.881 Nvme1n1 : 5.07 1576.85 6.16 0.00 0.00 80626.03 11796.48 107240.73 00:08:59.881 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:59.881 Verification LBA range: start 0xa0000 length 0xa0000 00:08:59.881 Nvme1n1 : 5.08 1600.88 6.25 0.00 0.00 79399.22 12273.11 87699.08 00:08:59.881 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:59.881 Verification LBA range: start 0x0 length 0x80000 00:08:59.881 Nvme2n1 : 5.09 1583.45 6.19 0.00 0.00 80390.66 13643.40 103904.35 00:08:59.881 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:59.881 Verification LBA range: start 0x80000 length 0x80000 00:08:59.881 Nvme2n1 : 5.09 1608.68 6.28 0.00 0.00 79089.80 11558.17 84839.33 00:08:59.881 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:59.881 Verification LBA range: start 0x0 length 0x80000 00:08:59.881 Nvme2n2 : 5.10 1582.48 6.18 0.00 0.00 80299.05 14715.81 99138.09 00:08:59.881 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:59.881 Verification LBA range: start 0x80000 length 0x80000 00:08:59.881 Nvme2n2 : 5.10 1607.77 6.28 0.00 0.00 78992.12 13047.62 82932.83 00:08:59.881 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:59.881 Verification LBA range: start 0x0 length 0x80000 00:08:59.881 Nvme2n3 : 5.10 1581.63 6.18 0.00 0.00 80151.65 14834.97 106764.10 00:08:59.881 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:59.881 Verification LBA range: start 0x80000 length 0x80000 00:08:59.881 Nvme2n3 : 5.10 1607.07 6.28 0.00 0.00 78894.04 12928.47 81502.95 00:08:59.882 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:59.882 Verification LBA range: start 0x0 length 0x20000 00:08:59.882 Nvme3n1 : 5.10 1580.78 6.17 0.00 0.00 80016.19 15252.01 110100.48 00:08:59.882 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:59.882 Verification LBA range: start 0x20000 length 0x20000 00:08:59.882 Nvme3n1 : 5.10 1606.55 6.28 0.00 0.00 78813.81 12273.11 88175.71 00:08:59.882 =================================================================================================================== 00:08:59.882 Total : 19115.76 74.67 0.00 0.00 79736.83 10307.03 111053.73 00:09:00.146 00:09:00.146 real 0m6.498s 00:09:00.146 user 0m11.881s 00:09:00.146 sys 0m0.297s 00:09:00.146 13:07:56 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:00.146 13:07:56 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:00.146 ************************************ 00:09:00.146 END TEST bdev_verify 00:09:00.146 ************************************ 00:09:00.146 13:07:56 blockdev_nvme -- 
bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:00.146 13:07:56 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:09:00.146 13:07:56 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:00.146 13:07:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:00.146 ************************************ 00:09:00.146 START TEST bdev_verify_big_io 00:09:00.146 ************************************ 00:09:00.146 13:07:56 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:00.404 [2024-07-15 13:07:56.891993] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:09:00.404 [2024-07-15 13:07:56.892186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78300 ] 00:09:00.404 [2024-07-15 13:07:57.039358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:00.404 [2024-07-15 13:07:57.141649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.404 [2024-07-15 13:07:57.141658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.970 Running I/O for 5 seconds... 00:09:07.518 00:09:07.518 Latency(us) 00:09:07.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.518 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:07.518 Verification LBA range: start 0x0 length 0xbd0b 00:09:07.518 Nvme0n1 : 5.64 147.64 9.23 0.00 0.00 841999.40 19660.80 911307.87 00:09:07.518 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:07.518 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:07.518 Nvme0n1 : 5.73 127.30 7.96 0.00 0.00 951391.58 22758.87 1006632.96 00:09:07.518 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:07.518 Verification LBA range: start 0x0 length 0xa000 00:09:07.519 Nvme1n1 : 5.64 147.57 9.22 0.00 0.00 820503.27 45041.11 770226.73 00:09:07.519 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:07.519 Verification LBA range: start 0xa000 length 0xa000 00:09:07.519 Nvme1n1 : 5.81 132.12 8.26 0.00 0.00 901120.70 77213.32 827421.79 00:09:07.519 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:07.519 Verification LBA range: start 0x0 length 0x8000 00:09:07.519 Nvme2n1 : 5.74 153.13 9.57 0.00 0.00 772612.44 51237.24 781665.75 00:09:07.519 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:07.519 Verification LBA range: start 0x8000 length 0x8000 00:09:07.519 Nvme2n1 : 5.74 124.37 7.77 0.00 0.00 928146.63 126782.37 1578583.51 00:09:07.519 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:07.519 Verification LBA range: start 0x0 length 0x8000 00:09:07.519 Nvme2n2 : 5.74 151.62 9.48 0.00 0.00 754164.43 50998.92 796917.76 00:09:07.519 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:07.519 Verification LBA range: start 0x8000 length 0x8000 00:09:07.519 Nvme2n2 : 5.84 136.26 8.52 0.00 
0.00 834363.32 15966.95 1593835.52 00:09:07.519 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:07.519 Verification LBA range: start 0x0 length 0x8000 00:09:07.519 Nvme2n3 : 5.80 157.52 9.84 0.00 0.00 709043.98 53620.36 812169.77 00:09:07.519 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:07.519 Verification LBA range: start 0x8000 length 0x8000 00:09:07.519 Nvme2n3 : 5.86 139.78 8.74 0.00 0.00 785957.85 17515.99 1624339.55 00:09:07.519 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:07.519 Verification LBA range: start 0x0 length 0x2000 00:09:07.519 Nvme3n1 : 5.82 171.49 10.72 0.00 0.00 639756.80 3738.53 819795.78 00:09:07.519 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:07.519 Verification LBA range: start 0x2000 length 0x2000 00:09:07.519 Nvme3n1 : 5.93 175.94 11.00 0.00 0.00 613203.53 886.23 1654843.58 00:09:07.519 =================================================================================================================== 00:09:07.519 Total : 1764.73 110.30 0.00 0.00 784849.96 886.23 1654843.58 00:09:07.776 00:09:07.776 real 0m7.496s 00:09:07.776 user 0m13.846s 00:09:07.776 sys 0m0.305s 00:09:07.776 13:08:04 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:07.776 13:08:04 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:07.776 ************************************ 00:09:07.776 END TEST bdev_verify_big_io 00:09:07.776 ************************************ 00:09:07.776 13:08:04 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:07.776 13:08:04 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:09:07.776 13:08:04 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:07.776 13:08:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:07.776 ************************************ 00:09:07.776 START TEST bdev_write_zeroes 00:09:07.776 ************************************ 00:09:07.776 13:08:04 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:07.776 [2024-07-15 13:08:04.451076] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:09:07.777 [2024-07-15 13:08:04.451335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78404 ] 00:09:08.034 [2024-07-15 13:08:04.599812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.034 [2024-07-15 13:08:04.706164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.622 Running I/O for 1 seconds... 
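As a quick consistency check on the two verify tables above (the 4096-byte pass and the 64 KiB big-I/O pass): throughput is just IOPS × I/O size, so 1577.54 IOPS × 4096 B comes to about 6.16 MiB/s for the Nvme0n1 row, and 147.64 IOPS × 65536 B to about 9.23 MiB/s for the corresponding big-I/O row. Likewise, by Little's law the average latency should be roughly queue depth / IOPS, and 128 / 1577.54 s is about 81,000 us, in line with the 80738.46 us reported in the Average column.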
00:09:09.551 00:09:09.551 Latency(us) 00:09:09.551 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:09.551 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:09.551 Nvme0n1 : 1.04 7048.81 27.53 0.00 0.00 17974.79 7745.16 85792.58 00:09:09.551 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:09.551 Nvme1n1 : 1.02 7314.16 28.57 0.00 0.00 17433.16 11975.21 51713.86 00:09:09.551 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:09.551 Nvme2n1 : 1.03 7300.87 28.52 0.00 0.00 17422.83 11498.59 50045.67 00:09:09.551 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:09.551 Nvme2n2 : 1.03 7289.32 28.47 0.00 0.00 17419.69 11498.59 49569.05 00:09:09.551 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:09.551 Nvme2n3 : 1.03 7277.31 28.43 0.00 0.00 17305.84 7923.90 49330.73 00:09:09.551 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:09.551 Nvme3n1 : 1.03 7265.86 28.38 0.00 0.00 17298.65 7566.43 48854.11 00:09:09.551 =================================================================================================================== 00:09:09.551 Total : 43496.34 169.91 0.00 0.00 17473.74 7566.43 85792.58 00:09:10.117 00:09:10.117 real 0m2.214s 00:09:10.117 user 0m1.840s 00:09:10.117 sys 0m0.254s 00:09:10.117 13:08:06 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:10.117 13:08:06 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:10.117 ************************************ 00:09:10.117 END TEST bdev_write_zeroes 00:09:10.117 ************************************ 00:09:10.117 13:08:06 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:10.117 13:08:06 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:09:10.117 13:08:06 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:10.117 13:08:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:10.117 ************************************ 00:09:10.117 START TEST bdev_json_nonenclosed 00:09:10.117 ************************************ 00:09:10.117 13:08:06 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:10.117 [2024-07-15 13:08:06.747499] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:09:10.117 [2024-07-15 13:08:06.747734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78446 ] 00:09:10.374 [2024-07-15 13:08:06.898921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.374 [2024-07-15 13:08:07.039162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.374 [2024-07-15 13:08:07.039351] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:09:10.374 [2024-07-15 13:08:07.039414] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:10.374 [2024-07-15 13:08:07.039455] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:10.631 00:09:10.631 real 0m0.583s 00:09:10.631 user 0m0.322s 00:09:10.631 sys 0m0.154s 00:09:10.631 13:08:07 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:10.631 ************************************ 00:09:10.631 END TEST bdev_json_nonenclosed 00:09:10.631 ************************************ 00:09:10.631 13:08:07 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:10.631 13:08:07 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:10.631 13:08:07 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:09:10.631 13:08:07 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:10.631 13:08:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:10.631 ************************************ 00:09:10.631 START TEST bdev_json_nonarray 00:09:10.631 ************************************ 00:09:10.631 13:08:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:10.631 [2024-07-15 13:08:07.349336] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:09:10.631 [2024-07-15 13:08:07.349530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78470 ] 00:09:10.888 [2024-07-15 13:08:07.494114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.146 [2024-07-15 13:08:07.627934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.146 [2024-07-15 13:08:07.628088] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
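Both negative tests above feed bdevperf a deliberately malformed config (nonenclosed.json, then nonarray.json) and expect exactly these json_config errors. For comparison, a sketch of the shape a valid config must have, i.e. a top-level object whose "subsystems" member is an array (the traddr and name values here are simply the controllers this run attaches elsewhere in the log):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } }
          ]
        }
      ]
    }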
00:09:11.146 [2024-07-15 13:08:07.628123] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:11.146 [2024-07-15 13:08:07.628141] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:11.146 00:09:11.146 real 0m0.566s 00:09:11.146 user 0m0.317s 00:09:11.146 sys 0m0.144s 00:09:11.146 13:08:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:11.146 13:08:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:11.146 ************************************ 00:09:11.146 END TEST bdev_json_nonarray 00:09:11.146 ************************************ 00:09:11.146 13:08:07 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:09:11.146 13:08:07 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:09:11.146 13:08:07 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:09:11.146 13:08:07 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:09:11.146 13:08:07 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:09:11.146 13:08:07 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:11.146 13:08:07 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:11.146 13:08:07 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:09:11.146 13:08:07 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:09:11.146 13:08:07 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:09:11.146 13:08:07 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:09:11.146 00:09:11.146 real 0m34.674s 00:09:11.146 user 0m52.898s 00:09:11.146 sys 0m6.909s 00:09:11.146 13:08:07 blockdev_nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:11.146 13:08:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:11.146 ************************************ 00:09:11.146 END TEST blockdev_nvme 00:09:11.146 ************************************ 00:09:11.404 13:08:07 -- spdk/autotest.sh@213 -- # uname -s 00:09:11.405 13:08:07 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:09:11.405 13:08:07 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:11.405 13:08:07 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:11.405 13:08:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:11.405 13:08:07 -- common/autotest_common.sh@10 -- # set +x 00:09:11.405 ************************************ 00:09:11.405 START TEST blockdev_nvme_gpt 00:09:11.405 ************************************ 00:09:11.405 13:08:07 blockdev_nvme_gpt -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:11.405 * Looking for test storage... 
00:09:11.405 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:11.405 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:11.405 13:08:08 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:09:11.405 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:11.405 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:11.405 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:11.405 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:11.405 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:09:11.405 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:11.405 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:09:11.405 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:09:11.405 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:09:11.405 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:09:11.405 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # uname -s 00:09:11.405 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:09:11.405 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:09:11.405 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # test_type=gpt 00:09:11.405 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # crypto_device= 00:09:11.405 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # dek= 00:09:11.405 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # env_ctx= 00:09:11.405 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:09:11.405 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:09:11.405 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:09:11.405 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:09:11.405 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:09:11.405 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=78542 00:09:11.405 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:11.405 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:11.405 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 78542 00:09:11.405 13:08:08 blockdev_nvme_gpt -- common/autotest_common.sh@827 -- # '[' -z 78542 ']' 00:09:11.405 13:08:08 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.405 13:08:08 blockdev_nvme_gpt -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:11.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.405 13:08:08 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
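The blockdev_nvme_gpt suite starts the same way: spdk_tgt is launched and the harness blocks until its RPC socket answers. A minimal hand-rolled equivalent of that wait (the real waitforlisten helper also watches the pid and enforces a timeout) might look like:

    # start the target, then poll its default RPC socket until it responds
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    tgt_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done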
00:09:11.405 13:08:08 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:11.405 13:08:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:11.663 [2024-07-15 13:08:08.148083] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:09:11.663 [2024-07-15 13:08:08.148302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78542 ] 00:09:11.663 [2024-07-15 13:08:08.298467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.921 [2024-07-15 13:08:08.426970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.487 13:08:09 blockdev_nvme_gpt -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:12.487 13:08:09 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # return 0 00:09:12.487 13:08:09 blockdev_nvme_gpt -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:09:12.487 13:08:09 blockdev_nvme_gpt -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:09:12.487 13:08:09 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:12.772 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:13.031 Waiting for block devices as requested 00:09:13.031 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:13.031 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:13.289 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:13.289 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.555 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:18.555 13:08:14 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:09:18.555 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:09:18.555 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:09:18.555 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1666 -- # local nvme bdf 00:09:18.555 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:18.555 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:09:18.555 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:09:18.555 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:18.555 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:18.555 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:18.555 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:09:18.555 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:09:18.555 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:18.556 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:18.556 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:18.556 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:09:18.556 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:09:18.556 
13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:09:18.556 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:18.556 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:18.556 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:09:18.556 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:09:18.556 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:09:18.556 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:18.556 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:18.556 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:09:18.556 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:09:18.556 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:09:18.556 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:18.556 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:18.556 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:09:18.556 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:09:18.556 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:09:18.556 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:18.556 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:18.556 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:09:18.556 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:09:18.556 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:09:18.556 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:18.556 13:08:14 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:11.0/nvme/nvme0/nvme0n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n2' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n3' '/sys/bus/pci/drivers/nvme/0000:00:13.0/nvme/nvme3/nvme3c3n1') 00:09:18.556 13:08:14 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:09:18.556 13:08:14 blockdev_nvme_gpt -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:09:18.556 13:08:14 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:09:18.556 13:08:14 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:09:18.556 13:08:14 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # dev=/dev/nvme1n1 00:09:18.556 13:08:14 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # parted /dev/nvme1n1 -ms print 00:09:18.556 13:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme1n1: unrecognised disk label 00:09:18.556 BYT; 00:09:18.556 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:09:18.556 13:08:15 blockdev_nvme_gpt 
-- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme1n1: unrecognised disk label 00:09:18.556 BYT; 00:09:18.556 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\1\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:09:18.556 13:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme1n1 00:09:18.556 13:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@116 -- # break 00:09:18.556 13:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme1n1 ]] 00:09:18.556 13:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:09:18.556 13:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:18.556 13:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme1n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:09:18.556 13:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:09:18.556 13:08:15 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:09:18.556 13:08:15 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:18.556 13:08:15 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:18.556 13:08:15 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:09:18.556 13:08:15 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:09:18.556 13:08:15 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:18.556 13:08:15 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:09:18.556 13:08:15 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:18.556 13:08:15 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:18.556 13:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:18.556 13:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:09:18.556 13:08:15 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:09:18.556 13:08:15 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:18.556 13:08:15 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:18.556 13:08:15 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:09:18.556 13:08:15 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:09:18.556 13:08:15 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:18.556 13:08:15 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:09:18.556 13:08:15 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:18.556 13:08:15 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:18.556 13:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:18.556 13:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 
1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme1n1 00:09:19.491 The operation has completed successfully. 00:09:19.491 13:08:16 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme1n1 00:09:20.427 The operation has completed successfully. 00:09:20.427 13:08:17 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:20.993 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:21.560 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:21.560 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:21.560 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:21.560 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:21.818 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:09:21.818 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.818 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:21.818 [] 00:09:21.818 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.818 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:09:21.818 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:09:21.818 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:21.818 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:21.818 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:21.818 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.818 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:22.141 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.141 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:09:22.141 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.141 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:22.141 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.141 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # cat 00:09:22.141 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:09:22.141 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.141 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:22.141 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.141 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:09:22.141 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.141 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 
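For context, the partition layout that setup_gpt_conf stamped onto the blank namespace above reduces to the following steps; this is a sketch using the device path and GUIDs seen in this run (the type GUIDs are parsed out of module/bdev/gpt/gpt.h at test time, so treat the literals as run-specific).

dev=/dev/nvme1n1   # first namespace whose `parted -ms print` reported "unrecognised disk label"
parted -s "$dev" mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
# Partition 1 gets SPDK_GPT_PART_TYPE_GUID, partition 2 the legacy SPDK_GPT_PART_TYPE_GUID_OLD,
# plus fixed unique GUIDs so the resulting GPT bdevs have predictable UUIDs.
sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"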
00:09:22.141 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.141 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:22.141 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.141 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:22.141 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.141 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:09:22.141 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:09:22.141 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:09:22.141 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.141 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:22.400 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.400 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:09:22.400 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # jq -r .name 00:09:22.401 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "c42bda0d-3842-4451-aa1c-256a7ab7757d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "c42bda0d-3842-4451-aa1c-256a7ab7757d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "c2c93fa9-d944-4b35-a2dc-677bef592fce"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c2c93fa9-d944-4b35-a2dc-677bef592fce",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "41175297-3daa-4908-844f-5456947958bb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "41175297-3daa-4908-844f-5456947958bb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "c9f1df50-2d44-47bf-9857-f532b4bdc320"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c9f1df50-2d44-47bf-9857-f532b4bdc320",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "1f4c0b7c-4cdc-45c2-b1a8-7d39b836ea70"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1f4c0b7c-4cdc-45c2-b1a8-7d39b836ea70",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:22.401 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:09:22.401 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:09:22.401 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:09:22.401 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@754 -- # killprocess 78542 00:09:22.401 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@946 -- # '[' -z 78542 ']' 00:09:22.401 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # kill -0 78542 00:09:22.401 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@951 -- # uname 00:09:22.401 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:22.401 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78542 00:09:22.401 
killing process with pid 78542 00:09:22.401 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:22.401 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:22.401 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78542' 00:09:22.401 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@965 -- # kill 78542 00:09:22.401 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # wait 78542 00:09:22.968 13:08:19 blockdev_nvme_gpt -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:22.968 13:08:19 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:22.968 13:08:19 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:09:22.968 13:08:19 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:22.968 13:08:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:22.968 ************************************ 00:09:22.968 START TEST bdev_hello_world 00:09:22.968 ************************************ 00:09:22.968 13:08:19 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:22.968 [2024-07-15 13:08:19.551587] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:09:22.968 [2024-07-15 13:08:19.551749] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79156 ] 00:09:22.968 [2024-07-15 13:08:19.696012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.232 [2024-07-15 13:08:19.797074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.490 [2024-07-15 13:08:20.217805] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:23.490 [2024-07-15 13:08:20.217908] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:09:23.490 [2024-07-15 13:08:20.217962] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:23.490 [2024-07-15 13:08:20.220776] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:23.490 [2024-07-15 13:08:20.221500] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:23.490 [2024-07-15 13:08:20.221556] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:23.490 [2024-07-15 13:08:20.221816] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
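Stripped of the run_test wrapper, the hello-world pass above is just the stock example binary pointed at the first GPT partition; a bare-invocation sketch with the paths used in this job:

# Open Nvme0n1p1 via the example app, write "Hello World!" to it and read the string back.
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -b Nvme0n1p1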
00:09:23.490 00:09:23.490 [2024-07-15 13:08:20.221889] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:24.056 00:09:24.056 real 0m1.046s 00:09:24.056 user 0m0.687s 00:09:24.056 sys 0m0.252s 00:09:24.056 13:08:20 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:24.056 13:08:20 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:24.056 ************************************ 00:09:24.056 END TEST bdev_hello_world 00:09:24.056 ************************************ 00:09:24.056 13:08:20 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:09:24.056 13:08:20 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:24.056 13:08:20 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:24.056 13:08:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:24.056 ************************************ 00:09:24.056 START TEST bdev_bounds 00:09:24.056 ************************************ 00:09:24.056 13:08:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:09:24.056 Process bdevio pid: 79187 00:09:24.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.057 13:08:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=79187 00:09:24.057 13:08:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:24.057 13:08:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:24.057 13:08:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 79187' 00:09:24.057 13:08:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 79187 00:09:24.057 13:08:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 79187 ']' 00:09:24.057 13:08:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.057 13:08:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:24.057 13:08:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.057 13:08:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:24.057 13:08:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:24.057 [2024-07-15 13:08:20.670546] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
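In outline, the bounds test launched here starts the bdevio app in wait mode and then drives it over its RPC socket; a simplified sketch (flags and paths copied from the trace, the wait-for-listen loop reduced to a socket-existence poll):

# Start bdevio in wait mode (-w) against the bdevs defined in bdev.json.
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
bdevio_pid=$!
# Wait until the app exposes its RPC socket, then run the per-bdev test suites.
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
kill "$bdevio_pid"; wait "$bdevio_pid"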
00:09:24.057 [2024-07-15 13:08:20.670950] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79187 ] 00:09:24.315 [2024-07-15 13:08:20.824416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:24.315 [2024-07-15 13:08:20.931417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.315 [2024-07-15 13:08:20.931449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.315 [2024-07-15 13:08:20.931476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:25.249 13:08:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:25.249 13:08:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:09:25.249 13:08:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:25.249 I/O targets: 00:09:25.249 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:09:25.249 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:09:25.249 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:25.249 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:25.249 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:25.249 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:25.249 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:25.249 00:09:25.249 00:09:25.249 CUnit - A unit testing framework for C - Version 2.1-3 00:09:25.249 http://cunit.sourceforge.net/ 00:09:25.249 00:09:25.249 00:09:25.249 Suite: bdevio tests on: Nvme3n1 00:09:25.249 Test: blockdev write read block ...passed 00:09:25.249 Test: blockdev write zeroes read block ...passed 00:09:25.249 Test: blockdev write zeroes read no split ...passed 00:09:25.249 Test: blockdev write zeroes read split ...passed 00:09:25.249 Test: blockdev write zeroes read split partial ...passed 00:09:25.249 Test: blockdev reset ...[2024-07-15 13:08:21.832552] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:09:25.249 passed 00:09:25.249 Test: blockdev write read 8 blocks ...[2024-07-15 13:08:21.834730] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:25.249 passed 00:09:25.249 Test: blockdev write read size > 128k ...passed 00:09:25.249 Test: blockdev write read invalid size ...passed 00:09:25.249 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:25.249 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:25.249 Test: blockdev write read max offset ...passed 00:09:25.249 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:25.249 Test: blockdev writev readv 8 blocks ...passed 00:09:25.249 Test: blockdev writev readv 30 x 1block ...passed 00:09:25.249 Test: blockdev writev readv block ...passed 00:09:25.249 Test: blockdev writev readv size > 128k ...passed 00:09:25.249 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:25.249 Test: blockdev comparev and writev ...[2024-07-15 13:08:21.841580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2be604000 len:0x1000 00:09:25.249 [2024-07-15 13:08:21.841642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:25.249 passed 00:09:25.249 Test: blockdev nvme passthru rw ...passed 00:09:25.249 Test: blockdev nvme passthru vendor specific ...passed 00:09:25.250 Test: blockdev nvme admin passthru ...[2024-07-15 13:08:21.842458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:25.250 [2024-07-15 13:08:21.842508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:25.250 passed 00:09:25.250 Test: blockdev copy ...passed 00:09:25.250 Suite: bdevio tests on: Nvme2n3 00:09:25.250 Test: blockdev write read block ...passed 00:09:25.250 Test: blockdev write zeroes read block ...passed 00:09:25.250 Test: blockdev write zeroes read no split ...passed 00:09:25.250 Test: blockdev write zeroes read split ...passed 00:09:25.250 Test: blockdev write zeroes read split partial ...passed 00:09:25.250 Test: blockdev reset ...[2024-07-15 13:08:21.856924] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:25.250 passed 00:09:25.250 Test: blockdev write read 8 blocks ...[2024-07-15 13:08:21.859447] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:25.250 passed 00:09:25.250 Test: blockdev write read size > 128k ...passed 00:09:25.250 Test: blockdev write read invalid size ...passed 00:09:25.250 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:25.250 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:25.250 Test: blockdev write read max offset ...passed 00:09:25.250 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:25.250 Test: blockdev writev readv 8 blocks ...passed 00:09:25.250 Test: blockdev writev readv 30 x 1block ...passed 00:09:25.250 Test: blockdev writev readv block ...passed 00:09:25.250 Test: blockdev writev readv size > 128k ...passed 00:09:25.250 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:25.250 Test: blockdev comparev and writev ...[2024-07-15 13:08:21.865811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2be604000 len:0x1000 00:09:25.250 [2024-07-15 13:08:21.865891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:25.250 passed 00:09:25.250 Test: blockdev nvme passthru rw ...passed 00:09:25.250 Test: blockdev nvme passthru vendor specific ...passed 00:09:25.250 Test: blockdev nvme admin passthru ...[2024-07-15 13:08:21.866670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:25.250 [2024-07-15 13:08:21.866721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:25.250 passed 00:09:25.250 Test: blockdev copy ...passed 00:09:25.250 Suite: bdevio tests on: Nvme2n2 00:09:25.250 Test: blockdev write read block ...passed 00:09:25.250 Test: blockdev write zeroes read block ...passed 00:09:25.250 Test: blockdev write zeroes read no split ...passed 00:09:25.250 Test: blockdev write zeroes read split ...passed 00:09:25.250 Test: blockdev write zeroes read split partial ...passed 00:09:25.250 Test: blockdev reset ...[2024-07-15 13:08:21.880186] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:25.250 passed 00:09:25.250 Test: blockdev write read 8 blocks ...[2024-07-15 13:08:21.882490] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:25.250 passed 00:09:25.250 Test: blockdev write read size > 128k ...passed 00:09:25.250 Test: blockdev write read invalid size ...passed 00:09:25.250 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:25.250 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:25.250 Test: blockdev write read max offset ...passed 00:09:25.250 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:25.250 Test: blockdev writev readv 8 blocks ...passed 00:09:25.250 Test: blockdev writev readv 30 x 1block ...passed 00:09:25.250 Test: blockdev writev readv block ...passed 00:09:25.250 Test: blockdev writev readv size > 128k ...passed 00:09:25.250 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:25.250 Test: blockdev comparev and writev ...[2024-07-15 13:08:21.889196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1422000 len:0x1000 00:09:25.250 [2024-07-15 13:08:21.889250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:25.250 passed 00:09:25.250 Test: blockdev nvme passthru rw ...passed 00:09:25.250 Test: blockdev nvme passthru vendor specific ...passed 00:09:25.250 Test: blockdev nvme admin passthru ...[2024-07-15 13:08:21.890046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:25.250 [2024-07-15 13:08:21.890095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:25.250 passed 00:09:25.250 Test: blockdev copy ...passed 00:09:25.250 Suite: bdevio tests on: Nvme2n1 00:09:25.250 Test: blockdev write read block ...passed 00:09:25.250 Test: blockdev write zeroes read block ...passed 00:09:25.250 Test: blockdev write zeroes read no split ...passed 00:09:25.250 Test: blockdev write zeroes read split ...passed 00:09:25.250 Test: blockdev write zeroes read split partial ...passed 00:09:25.250 Test: blockdev reset ...[2024-07-15 13:08:21.903312] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:25.250 passed 00:09:25.250 Test: blockdev write read 8 blocks ...[2024-07-15 13:08:21.905664] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:25.250 passed 00:09:25.250 Test: blockdev write read size > 128k ...passed 00:09:25.250 Test: blockdev write read invalid size ...passed 00:09:25.250 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:25.250 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:25.250 Test: blockdev write read max offset ...passed 00:09:25.250 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:25.250 Test: blockdev writev readv 8 blocks ...passed 00:09:25.250 Test: blockdev writev readv 30 x 1block ...passed 00:09:25.250 Test: blockdev writev readv block ...passed 00:09:25.250 Test: blockdev writev readv size > 128k ...passed 00:09:25.250 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:25.250 Test: blockdev comparev and writev ...[2024-07-15 13:08:21.912081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2be60d000 len:0x1000 00:09:25.250 [2024-07-15 13:08:21.912138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:25.250 passed 00:09:25.250 Test: blockdev nvme passthru rw ...passed 00:09:25.250 Test: blockdev nvme passthru vendor specific ...passed 00:09:25.250 Test: blockdev nvme admin passthru ...[2024-07-15 13:08:21.912993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:25.250 [2024-07-15 13:08:21.913040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:25.250 passed 00:09:25.250 Test: blockdev copy ...passed 00:09:25.250 Suite: bdevio tests on: Nvme1n1 00:09:25.250 Test: blockdev write read block ...passed 00:09:25.250 Test: blockdev write zeroes read block ...passed 00:09:25.250 Test: blockdev write zeroes read no split ...passed 00:09:25.250 Test: blockdev write zeroes read split ...passed 00:09:25.250 Test: blockdev write zeroes read split partial ...passed 00:09:25.250 Test: blockdev reset ...[2024-07-15 13:08:21.926592] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:09:25.250 [2024-07-15 13:08:21.928709] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:25.250 passed 00:09:25.250 Test: blockdev write read 8 blocks ...passed 00:09:25.250 Test: blockdev write read size > 128k ...passed 00:09:25.250 Test: blockdev write read invalid size ...passed 00:09:25.250 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:25.250 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:25.250 Test: blockdev write read max offset ...passed 00:09:25.250 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:25.250 Test: blockdev writev readv 8 blocks ...passed 00:09:25.250 Test: blockdev writev readv 30 x 1block ...passed 00:09:25.250 Test: blockdev writev readv block ...passed 00:09:25.250 Test: blockdev writev readv size > 128k ...passed 00:09:25.250 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:25.250 Test: blockdev comparev and writev ...[2024-07-15 13:08:21.935501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2be232000 len:0x1000 00:09:25.250 [2024-07-15 13:08:21.935566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:25.250 passed 00:09:25.250 Test: blockdev nvme passthru rw ...passed 00:09:25.250 Test: blockdev nvme passthru vendor specific ...passed 00:09:25.250 Test: blockdev nvme admin passthru ...[2024-07-15 13:08:21.936320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:25.250 [2024-07-15 13:08:21.936368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:25.250 passed 00:09:25.250 Test: blockdev copy ...passed 00:09:25.250 Suite: bdevio tests on: Nvme0n1p2 00:09:25.250 Test: blockdev write read block ...passed 00:09:25.250 Test: blockdev write zeroes read block ...passed 00:09:25.250 Test: blockdev write zeroes read no split ...passed 00:09:25.250 Test: blockdev write zeroes read split ...passed 00:09:25.250 Test: blockdev write zeroes read split partial ...passed 00:09:25.250 Test: blockdev reset ...[2024-07-15 13:08:21.952140] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:25.250 passed 00:09:25.250 Test: blockdev write read 8 blocks ...[2024-07-15 13:08:21.954462] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:25.250 passed 00:09:25.250 Test: blockdev write read size > 128k ...passed 00:09:25.250 Test: blockdev write read invalid size ...passed 00:09:25.251 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:25.251 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:25.251 Test: blockdev write read max offset ...passed 00:09:25.251 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:25.251 Test: blockdev writev readv 8 blocks ...passed 00:09:25.251 Test: blockdev writev readv 30 x 1block ...passed 00:09:25.251 Test: blockdev writev readv block ...passed 00:09:25.251 Test: blockdev writev readv size > 128k ...passed 00:09:25.251 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:25.251 Test: blockdev comparev and writev ...passed 00:09:25.251 Test: blockdev nvme passthru rw ...passed 00:09:25.251 Test: blockdev nvme passthru vendor specific ...passed 00:09:25.251 Test: blockdev nvme admin passthru ...passed 00:09:25.251 Test: blockdev copy ...[2024-07-15 13:08:21.960174] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:09:25.251 separate metadata which is not supported yet. 00:09:25.251 passed 00:09:25.251 Suite: bdevio tests on: Nvme0n1p1 00:09:25.251 Test: blockdev write read block ...passed 00:09:25.251 Test: blockdev write zeroes read block ...passed 00:09:25.251 Test: blockdev write zeroes read no split ...passed 00:09:25.251 Test: blockdev write zeroes read split ...passed 00:09:25.251 Test: blockdev write zeroes read split partial ...passed 00:09:25.251 Test: blockdev reset ...[2024-07-15 13:08:21.973763] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:25.251 passed 00:09:25.251 Test: blockdev write read 8 blocks ...[2024-07-15 13:08:21.976086] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:25.251 passed 00:09:25.251 Test: blockdev write read size > 128k ...passed 00:09:25.251 Test: blockdev write read invalid size ...passed 00:09:25.251 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:25.251 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:25.251 Test: blockdev write read max offset ...passed 00:09:25.251 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:25.251 Test: blockdev writev readv 8 blocks ...passed 00:09:25.251 Test: blockdev writev readv 30 x 1block ...passed 00:09:25.251 Test: blockdev writev readv block ...passed 00:09:25.251 Test: blockdev writev readv size > 128k ...passed 00:09:25.251 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:25.251 Test: blockdev comparev and writev ...passed 00:09:25.251 Test: blockdev nvme passthru rw ...passed 00:09:25.251 Test: blockdev nvme passthru vendor specific ...passed 00:09:25.251 Test: blockdev nvme admin passthru ...passed 00:09:25.251 Test: blockdev copy ...[2024-07-15 13:08:21.981829] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:09:25.251 separate metadata which is not supported yet. 
00:09:25.251 passed 00:09:25.251 00:09:25.251 Run Summary: Type Total Ran Passed Failed Inactive 00:09:25.251 suites 7 7 n/a 0 0 00:09:25.251 tests 161 161 161 0 0 00:09:25.251 asserts 1006 1006 1006 0 n/a 00:09:25.251 00:09:25.251 Elapsed time = 0.384 seconds 00:09:25.251 0 00:09:25.509 13:08:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 79187 00:09:25.509 13:08:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 79187 ']' 00:09:25.509 13:08:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 79187 00:09:25.509 13:08:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:09:25.509 13:08:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:25.509 13:08:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79187 00:09:25.509 13:08:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:25.509 13:08:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:25.509 13:08:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79187' 00:09:25.509 killing process with pid 79187 00:09:25.509 13:08:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@965 -- # kill 79187 00:09:25.509 13:08:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # wait 79187 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:09:25.766 00:09:25.766 real 0m1.696s 00:09:25.766 user 0m4.306s 00:09:25.766 sys 0m0.368s 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:25.766 ************************************ 00:09:25.766 END TEST bdev_bounds 00:09:25.766 ************************************ 00:09:25.766 13:08:22 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:25.766 13:08:22 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:09:25.766 13:08:22 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:25.766 13:08:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:25.766 ************************************ 00:09:25.766 START TEST bdev_nbd 00:09:25.766 ************************************ 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_nbd 
-- bdev/blockdev.sh@304 -- # local bdev_all 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=7 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=7 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:09:25.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=79241 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 79241 /var/tmp/spdk-nbd.sock 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 79241 ']' 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:25.766 13:08:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:25.766 [2024-07-15 13:08:22.422453] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
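The NBD test that begins here exports each bdev as a kernel /dev/nbdX node and sanity-reads it. Per device, the verification the trace performs below reduces to roughly this (socket path and scratch file as in the trace; the retry loops are simplified):

sock=/var/tmp/spdk-nbd.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Export one bdev; the RPC prints the nbd node it was attached to (e.g. /dev/nbd0).
nbd=$("$rpc" -s "$sock" nbd_start_disk Nvme0n1p1)
name=$(basename "$nbd")
# Wait for the kernel to publish the device, then read one 4 KiB block through it with O_DIRECT.
until grep -q -w "$name" /proc/partitions; do sleep 0.1; done
dd if="$nbd" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
[ "$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)" -ne 0 ]
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest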
00:09:25.766 [2024-07-15 13:08:22.422653] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.024 [2024-07-15 13:08:22.575611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.024 [2024-07-15 13:08:22.682542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.957 13:08:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:26.957 13:08:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:09:26.957 13:08:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:26.957 13:08:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:26.957 13:08:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:26.957 13:08:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:26.957 13:08:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:26.957 13:08:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:26.957 13:08:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:26.957 13:08:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:26.957 13:08:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:26.957 13:08:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:26.957 13:08:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:26.957 13:08:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:26.957 13:08:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:09:27.215 13:08:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:27.215 13:08:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:27.215 13:08:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:27.215 13:08:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:09:27.215 13:08:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:27.215 13:08:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:27.215 13:08:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:27.215 13:08:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:09:27.215 13:08:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:27.215 13:08:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:27.215 13:08:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:27.215 13:08:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 
-- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:27.215 1+0 records in 00:09:27.215 1+0 records out 00:09:27.215 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00067537 s, 6.1 MB/s 00:09:27.215 13:08:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.215 13:08:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:27.215 13:08:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.215 13:08:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:27.215 13:08:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:27.215 13:08:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:27.215 13:08:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:27.215 13:08:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:09:27.472 13:08:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:27.472 13:08:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:27.472 13:08:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:27.472 13:08:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:09:27.472 13:08:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:27.472 13:08:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:27.472 13:08:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:27.472 13:08:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:09:27.472 13:08:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:27.472 13:08:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:27.472 13:08:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:27.472 13:08:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:27.472 1+0 records in 00:09:27.472 1+0 records out 00:09:27.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508121 s, 8.1 MB/s 00:09:27.472 13:08:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.472 13:08:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:27.472 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.472 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:27.472 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:27.472 13:08:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:27.472 13:08:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:27.472 13:08:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:27.730 13:08:24 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:27.730 13:08:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:27.730 13:08:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:27.730 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:09:27.730 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:27.730 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:27.730 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:27.730 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:09:27.730 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:27.730 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:27.730 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:27.730 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:27.730 1+0 records in 00:09:27.730 1+0 records out 00:09:27.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500981 s, 8.2 MB/s 00:09:27.730 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.730 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:27.730 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.730 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:27.730 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:27.730 13:08:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:27.730 13:08:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:27.730 13:08:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:27.987 13:08:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:27.988 13:08:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:27.988 13:08:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:27.988 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:09:27.988 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:27.988 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:27.988 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:27.988 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:09:27.988 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:27.988 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:27.988 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:27.988 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:27.988 1+0 records in 00:09:27.988 1+0 records out 00:09:27.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527532 s, 7.8 MB/s 00:09:27.988 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.988 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:27.988 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.988 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:27.988 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:27.988 13:08:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:27.988 13:08:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:27.988 13:08:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:28.246 13:08:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:28.246 13:08:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:28.246 13:08:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:28.246 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:09:28.246 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:28.246 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:28.246 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:28.246 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:09:28.246 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:28.246 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:28.246 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:28.246 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:28.246 1+0 records in 00:09:28.246 1+0 records out 00:09:28.246 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000595177 s, 6.9 MB/s 00:09:28.246 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.246 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:28.246 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.246 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:28.246 13:08:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:28.246 13:08:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:28.246 13:08:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:28.246 13:08:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:28.503 13:08:25 blockdev_nvme_gpt.bdev_nbd 
-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:28.503 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:28.503 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:28.503 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:09:28.503 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:28.503 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:28.503 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:28.503 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:09:28.503 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:28.503 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:28.503 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:28.503 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:28.503 1+0 records in 00:09:28.503 1+0 records out 00:09:28.503 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000703947 s, 5.8 MB/s 00:09:28.503 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.503 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:28.503 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.503 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:28.503 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:28.503 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:28.503 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:28.503 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:28.760 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:28.760 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:28.761 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:28.761 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd6 00:09:28.761 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:28.761 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:28.761 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:28.761 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd6 /proc/partitions 00:09:28.761 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:28.761 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:28.761 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:28.761 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd6 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:28.761 1+0 records in 00:09:28.761 1+0 records out 00:09:28.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000776106 s, 5.3 MB/s 00:09:28.761 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.761 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:28.761 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.761 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:28.761 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:28.761 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:28.761 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:28.761 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:29.326 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:29.326 { 00:09:29.326 "nbd_device": "/dev/nbd0", 00:09:29.326 "bdev_name": "Nvme0n1p1" 00:09:29.326 }, 00:09:29.326 { 00:09:29.326 "nbd_device": "/dev/nbd1", 00:09:29.326 "bdev_name": "Nvme0n1p2" 00:09:29.326 }, 00:09:29.326 { 00:09:29.326 "nbd_device": "/dev/nbd2", 00:09:29.326 "bdev_name": "Nvme1n1" 00:09:29.326 }, 00:09:29.326 { 00:09:29.326 "nbd_device": "/dev/nbd3", 00:09:29.326 "bdev_name": "Nvme2n1" 00:09:29.326 }, 00:09:29.326 { 00:09:29.326 "nbd_device": "/dev/nbd4", 00:09:29.326 "bdev_name": "Nvme2n2" 00:09:29.326 }, 00:09:29.326 { 00:09:29.326 "nbd_device": "/dev/nbd5", 00:09:29.326 "bdev_name": "Nvme2n3" 00:09:29.326 }, 00:09:29.326 { 00:09:29.326 "nbd_device": "/dev/nbd6", 00:09:29.326 "bdev_name": "Nvme3n1" 00:09:29.326 } 00:09:29.326 ]' 00:09:29.326 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:29.326 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:29.326 { 00:09:29.326 "nbd_device": "/dev/nbd0", 00:09:29.326 "bdev_name": "Nvme0n1p1" 00:09:29.326 }, 00:09:29.326 { 00:09:29.326 "nbd_device": "/dev/nbd1", 00:09:29.326 "bdev_name": "Nvme0n1p2" 00:09:29.326 }, 00:09:29.326 { 00:09:29.326 "nbd_device": "/dev/nbd2", 00:09:29.326 "bdev_name": "Nvme1n1" 00:09:29.326 }, 00:09:29.326 { 00:09:29.326 "nbd_device": "/dev/nbd3", 00:09:29.326 "bdev_name": "Nvme2n1" 00:09:29.326 }, 00:09:29.326 { 00:09:29.326 "nbd_device": "/dev/nbd4", 00:09:29.326 "bdev_name": "Nvme2n2" 00:09:29.326 }, 00:09:29.326 { 00:09:29.326 "nbd_device": "/dev/nbd5", 00:09:29.326 "bdev_name": "Nvme2n3" 00:09:29.326 }, 00:09:29.326 { 00:09:29.326 "nbd_device": "/dev/nbd6", 00:09:29.326 "bdev_name": "Nvme3n1" 00:09:29.326 } 00:09:29.326 ]' 00:09:29.326 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:29.326 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:29.326 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:29.326 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' 
'/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:29.326 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:29.326 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:29.326 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:29.326 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:29.585 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:29.585 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:29.585 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:29.585 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:29.585 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:29.585 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:29.585 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:29.585 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:29.585 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:29.585 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:29.842 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:29.842 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:29.842 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:29.842 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:29.842 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:29.842 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:29.842 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:29.842 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:29.842 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:29.842 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:30.137 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:30.137 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:30.137 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:30.137 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:30.137 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:30.137 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:30.137 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:30.137 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:30.137 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:30.137 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:30.395 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:30.395 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:30.395 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:30.395 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:30.395 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:30.395 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:30.395 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:30.395 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:30.395 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:30.395 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:30.653 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:30.653 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:30.653 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:30.653 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:30.653 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:30.653 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:30.653 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:30.653 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:30.653 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:30.653 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:30.911 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:30.911 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:30.911 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:30.911 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:30.911 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:30.911 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:30.911 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:30.911 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:30.911 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:30.911 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:31.476 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:31.476 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:31.476 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:09:31.476 13:08:27 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:31.476 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:31.476 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:31.476 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:31.476 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:31.476 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:31.476 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.476 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:31.734 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:31.734 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:31.734 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:31.734 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:31.734 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:31.734 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:31.734 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:31.734 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:31.734 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:31.734 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:31.734 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:31.734 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:31.734 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:31.734 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.734 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:31.734 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:31.734 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:31.734 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:31.734 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:31.734 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.734 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:31.734 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:31.734 13:08:28 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:31.734 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:31.734 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:31.734 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:31.734 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:31.734 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:09:31.992 /dev/nbd0 00:09:31.992 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:31.992 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:31.992 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:09:31.992 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:31.992 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:31.992 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:31.992 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:09:31.992 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:31.992 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:31.992 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:31.992 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:31.992 1+0 records in 00:09:31.992 1+0 records out 00:09:31.992 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565547 s, 7.2 MB/s 00:09:31.992 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.992 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:31.992 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.992 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:31.992 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:31.992 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:31.992 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:31.992 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:09:32.250 /dev/nbd1 00:09:32.250 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:32.250 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:32.250 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:09:32.250 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:32.250 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:32.250 13:08:28 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:32.250 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:09:32.250 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:32.250 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:32.250 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:32.250 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:32.250 1+0 records in 00:09:32.250 1+0 records out 00:09:32.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000793321 s, 5.2 MB/s 00:09:32.250 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.250 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:32.250 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.250 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:32.250 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:32.250 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:32.250 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:32.250 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:09:32.508 /dev/nbd10 00:09:32.508 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:32.508 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:32.508 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:09:32.508 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:32.508 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:32.508 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:32.508 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:09:32.508 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:32.508 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:32.508 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:32.508 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:32.508 1+0 records in 00:09:32.508 1+0 records out 00:09:32.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000743444 s, 5.5 MB/s 00:09:32.508 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.508 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:32.508 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.508 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 
'!=' 0 ']' 00:09:32.508 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:32.508 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:32.508 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:32.508 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:32.766 /dev/nbd11 00:09:32.766 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:32.766 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:32.766 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:09:32.766 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:32.766 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:32.766 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:32.766 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:09:32.766 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:32.766 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:32.766 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:32.766 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:32.766 1+0 records in 00:09:32.766 1+0 records out 00:09:32.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000748566 s, 5.5 MB/s 00:09:32.766 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.766 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:32.766 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.766 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:32.766 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:32.766 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:32.766 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:32.766 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:33.024 /dev/nbd12 00:09:33.024 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:33.024 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:33.024 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:09:33.024 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:33.024 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:33.024 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:33.024 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:09:33.024 13:08:29 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:33.024 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:33.024 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:33.024 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:33.024 1+0 records in 00:09:33.024 1+0 records out 00:09:33.024 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000737986 s, 5.6 MB/s 00:09:33.024 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:33.024 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:33.024 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:33.024 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:33.024 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:33.024 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:33.024 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:33.024 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:33.282 /dev/nbd13 00:09:33.540 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:33.540 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:33.540 13:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:09:33.540 13:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:33.540 13:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:33.540 13:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:33.540 13:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:09:33.540 13:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:33.540 13:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:33.540 13:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:33.540 13:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:33.540 1+0 records in 00:09:33.540 1+0 records out 00:09:33.540 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00182874 s, 2.2 MB/s 00:09:33.540 13:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:33.540 13:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:33.540 13:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:33.540 13:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:33.540 13:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:33.540 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:09:33.540 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:33.540 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:33.805 /dev/nbd14 00:09:33.805 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:33.805 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:33.805 13:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd14 00:09:33.805 13:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:33.805 13:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:33.805 13:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:33.805 13:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd14 /proc/partitions 00:09:33.806 13:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:33.806 13:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:33.806 13:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:33.806 13:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:33.806 1+0 records in 00:09:33.806 1+0 records out 00:09:33.806 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565112 s, 7.2 MB/s 00:09:33.806 13:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:33.806 13:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:33.806 13:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:33.806 13:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:33.806 13:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:33.806 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:33.806 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:33.806 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:33.806 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.806 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:34.065 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:34.065 { 00:09:34.065 "nbd_device": "/dev/nbd0", 00:09:34.065 "bdev_name": "Nvme0n1p1" 00:09:34.065 }, 00:09:34.065 { 00:09:34.065 "nbd_device": "/dev/nbd1", 00:09:34.065 "bdev_name": "Nvme0n1p2" 00:09:34.065 }, 00:09:34.065 { 00:09:34.065 "nbd_device": "/dev/nbd10", 00:09:34.065 "bdev_name": "Nvme1n1" 00:09:34.065 }, 00:09:34.065 { 00:09:34.065 "nbd_device": "/dev/nbd11", 00:09:34.065 "bdev_name": "Nvme2n1" 00:09:34.065 }, 00:09:34.065 { 00:09:34.065 "nbd_device": "/dev/nbd12", 00:09:34.065 "bdev_name": "Nvme2n2" 00:09:34.065 }, 00:09:34.065 { 00:09:34.065 "nbd_device": "/dev/nbd13", 00:09:34.065 "bdev_name": "Nvme2n3" 00:09:34.065 }, 00:09:34.065 { 
00:09:34.065 "nbd_device": "/dev/nbd14", 00:09:34.065 "bdev_name": "Nvme3n1" 00:09:34.065 } 00:09:34.065 ]' 00:09:34.065 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:34.065 { 00:09:34.065 "nbd_device": "/dev/nbd0", 00:09:34.065 "bdev_name": "Nvme0n1p1" 00:09:34.065 }, 00:09:34.065 { 00:09:34.065 "nbd_device": "/dev/nbd1", 00:09:34.065 "bdev_name": "Nvme0n1p2" 00:09:34.065 }, 00:09:34.065 { 00:09:34.065 "nbd_device": "/dev/nbd10", 00:09:34.065 "bdev_name": "Nvme1n1" 00:09:34.065 }, 00:09:34.065 { 00:09:34.065 "nbd_device": "/dev/nbd11", 00:09:34.065 "bdev_name": "Nvme2n1" 00:09:34.065 }, 00:09:34.065 { 00:09:34.065 "nbd_device": "/dev/nbd12", 00:09:34.065 "bdev_name": "Nvme2n2" 00:09:34.065 }, 00:09:34.065 { 00:09:34.065 "nbd_device": "/dev/nbd13", 00:09:34.065 "bdev_name": "Nvme2n3" 00:09:34.065 }, 00:09:34.065 { 00:09:34.065 "nbd_device": "/dev/nbd14", 00:09:34.065 "bdev_name": "Nvme3n1" 00:09:34.065 } 00:09:34.065 ]' 00:09:34.065 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:34.065 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:34.065 /dev/nbd1 00:09:34.065 /dev/nbd10 00:09:34.065 /dev/nbd11 00:09:34.065 /dev/nbd12 00:09:34.065 /dev/nbd13 00:09:34.065 /dev/nbd14' 00:09:34.065 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:34.065 /dev/nbd1 00:09:34.065 /dev/nbd10 00:09:34.065 /dev/nbd11 00:09:34.065 /dev/nbd12 00:09:34.065 /dev/nbd13 00:09:34.065 /dev/nbd14' 00:09:34.065 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:34.065 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:09:34.065 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:09:34.065 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:09:34.065 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:34.065 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:34.065 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:34.065 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:34.065 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:34.065 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:34.065 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:34.065 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:34.065 256+0 records in 00:09:34.065 256+0 records out 00:09:34.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00761835 s, 138 MB/s 00:09:34.065 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:34.065 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:34.324 256+0 records in 00:09:34.324 256+0 records out 00:09:34.324 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174811 s, 6.0 MB/s 00:09:34.324 
13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:34.324 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:34.582 256+0 records in 00:09:34.582 256+0 records out 00:09:34.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174406 s, 6.0 MB/s 00:09:34.582 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:34.582 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:34.582 256+0 records in 00:09:34.582 256+0 records out 00:09:34.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.171949 s, 6.1 MB/s 00:09:34.582 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:34.582 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:34.840 256+0 records in 00:09:34.840 256+0 records out 00:09:34.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153277 s, 6.8 MB/s 00:09:34.840 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:34.840 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:34.840 256+0 records in 00:09:34.840 256+0 records out 00:09:34.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159268 s, 6.6 MB/s 00:09:34.840 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:34.840 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:35.168 256+0 records in 00:09:35.168 256+0 records out 00:09:35.168 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157678 s, 6.7 MB/s 00:09:35.168 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:35.168 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:35.427 256+0 records in 00:09:35.428 256+0 records out 00:09:35.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.176502 s, 5.9 MB/s 00:09:35.428 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:35.428 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:35.428 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:35.428 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:35.428 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:35.428 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:35.428 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:35.428 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:35.428 13:08:31 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:35.428 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:35.428 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:35.428 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:35.428 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:35.428 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:35.428 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:35.428 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:35.428 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:35.428 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:35.428 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:35.428 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:35.428 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:35.428 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:35.428 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:35.428 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:35.428 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:35.428 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:35.428 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:35.428 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:35.428 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:35.687 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:35.687 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:35.687 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:35.687 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:35.687 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:35.687 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:35.687 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:35.687 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:35.687 
13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:35.687 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:36.079 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:36.079 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:36.079 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:36.079 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:36.079 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:36.079 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:36.079 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:36.079 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:36.079 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:36.079 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:36.363 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:36.363 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:36.363 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:36.363 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:36.363 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:36.363 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:36.363 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:36.363 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:36.363 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:36.363 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:36.363 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:36.363 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:36.363 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:36.363 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:36.363 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:36.363 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:36.656 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:36.656 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:36.656 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:36.656 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:36.656 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:36.656 13:08:33 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:36.656 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:36.656 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:36.656 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:36.656 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:36.656 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:36.656 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:36.656 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:36.656 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:36.914 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:36.914 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:36.914 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:36.914 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:36.914 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:36.914 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:36.914 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:36.914 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:36.914 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:36.914 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:37.230 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:37.230 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:37.230 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:37.230 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:37.230 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:37.230 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:37.230 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:37.230 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:37.230 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:37.230 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:37.230 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:37.510 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:37.510 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:37.511 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:37.511 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:37.511 
13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:37.511 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:37.511 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:37.511 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:37.511 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:37.511 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:37.511 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:37.511 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:37.511 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:37.511 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:37.511 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:37.511 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:37.511 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:37.511 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:37.889 malloc_lvol_verify 00:09:37.889 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:38.158 8d077014-99f7-4295-b4b9-158643fab1ce 00:09:38.158 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:38.417 028eabfc-0c29-44ad-b621-2ef4ad43ae9d 00:09:38.417 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:38.676 /dev/nbd0 00:09:38.676 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:38.676 mke2fs 1.46.5 (30-Dec-2021) 00:09:38.676 Discarding device blocks: 0/4096 done 00:09:38.676 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:38.676 00:09:38.676 Allocating group tables: 0/1 done 00:09:38.676 Writing inode tables: 0/1 done 00:09:38.676 Creating journal (1024 blocks): done 00:09:38.676 Writing superblocks and filesystem accounting information: 0/1 done 00:09:38.676 00:09:38.676 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:38.676 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:38.676 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:38.676 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:38.676 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:38.676 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:38.676 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.676 
13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:38.934 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:38.934 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:38.934 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:38.934 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.934 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.934 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:39.193 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:39.193 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:39.193 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:39.193 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:09:39.193 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 79241 00:09:39.193 13:08:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 79241 ']' 00:09:39.193 13:08:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 79241 00:09:39.193 13:08:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:09:39.193 13:08:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:39.193 13:08:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79241 00:09:39.193 13:08:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:39.193 13:08:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:39.193 13:08:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79241' 00:09:39.193 killing process with pid 79241 00:09:39.193 13:08:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@965 -- # kill 79241 00:09:39.193 13:08:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # wait 79241 00:09:39.452 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:09:39.452 00:09:39.452 real 0m13.677s 00:09:39.452 user 0m19.835s 00:09:39.452 sys 0m4.811s 00:09:39.452 ************************************ 00:09:39.452 END TEST bdev_nbd 00:09:39.452 ************************************ 00:09:39.452 13:08:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:39.452 13:08:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:39.452 13:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:09:39.452 13:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:09:39.452 13:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:09:39.452 skipping fio tests on NVMe due to multi-ns failures. 00:09:39.452 13:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
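The teardown traced above repeats one idiom per NBD device: ask the nbd server over its RPC socket to stop the disk, then poll /proc/partitions until the kernel entry disappears. The sketch below condenses that idiom together with the small lvol round-trip (malloc bdev, lvstore, lvol, nbd attach, mkfs.ext4) that nbd_with_lvol_verify drove; it is a minimal reconstruction, not the real nbd_common.sh, and the 0.1s poll interval and exit-on-timeout handling are assumptions the xtrace output does not show.

    #!/usr/bin/env bash
    # Minimal sketch of the nbd idioms exercised above (paths and RPC names as in the trace).
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    wait_for_nbd_exit() {                 # approximates waitfornbd_exit
        local nbd_name=$1
        for ((i = 1; i <= 20; i++)); do
            # done once the kernel no longer lists the device
            grep -q -w "$nbd_name" /proc/partitions || return 0
            sleep 0.1                     # interval assumed; not visible in the trace
        done
        echo "$nbd_name still present" >&2
        return 1
    }

    # lvol round-trip used for the filesystem check
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $rpc bdev_lvol_create lvol 4 -l lvs
    $rpc nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd0
    wait_for_nbd_exit nbd0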
00:09:39.452 13:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:39.452 13:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:39.452 13:08:36 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:09:39.452 13:08:36 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:39.452 13:08:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:39.452 ************************************ 00:09:39.452 START TEST bdev_verify 00:09:39.452 ************************************ 00:09:39.452 13:08:36 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:39.452 [2024-07-15 13:08:36.141217] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:09:39.452 [2024-07-15 13:08:36.141447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79683 ] 00:09:39.711 [2024-07-15 13:08:36.292319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:39.711 [2024-07-15 13:08:36.388682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.711 [2024-07-15 13:08:36.388722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.277 Running I/O for 5 seconds... 00:09:45.539 00:09:45.539 Latency(us) 00:09:45.539 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.539 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:45.539 Verification LBA range: start 0x0 length 0x5e800 00:09:45.539 Nvme0n1p1 : 5.07 1312.29 5.13 0.00 0.00 97310.47 19422.49 89605.59 00:09:45.539 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:45.539 Verification LBA range: start 0x5e800 length 0x5e800 00:09:45.539 Nvme0n1p1 : 5.08 1258.63 4.92 0.00 0.00 101480.24 13464.67 113436.86 00:09:45.539 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:45.539 Verification LBA range: start 0x0 length 0x5e7ff 00:09:45.539 Nvme0n1p2 : 5.07 1311.81 5.12 0.00 0.00 97180.42 19541.64 87222.46 00:09:45.539 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:45.539 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:09:45.539 Nvme0n1p2 : 5.09 1258.20 4.91 0.00 0.00 101238.93 13285.93 109623.85 00:09:45.539 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:45.539 Verification LBA range: start 0x0 length 0xa0000 00:09:45.539 Nvme1n1 : 5.08 1311.36 5.12 0.00 0.00 97014.14 19422.49 82932.83 00:09:45.539 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:45.539 Verification LBA range: start 0xa0000 length 0xa0000 00:09:45.539 Nvme1n1 : 5.09 1257.84 4.91 0.00 0.00 101067.21 13524.25 104857.60 00:09:45.539 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:45.539 Verification LBA range: start 0x0 length 0x80000 00:09:45.539 Nvme2n1 : 5.08 1310.95 5.12 0.00 0.00 96863.65 19541.64 78166.57 00:09:45.539 Job: Nvme2n1 (Core Mask 0x2, 
workload: verify, depth: 128, IO size: 4096) 00:09:45.539 Verification LBA range: start 0x80000 length 0x80000 00:09:45.539 Nvme2n1 : 5.09 1257.46 4.91 0.00 0.00 100887.03 13762.56 102474.47 00:09:45.539 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:45.539 Verification LBA range: start 0x0 length 0x80000 00:09:45.539 Nvme2n2 : 5.08 1310.55 5.12 0.00 0.00 96698.97 19184.17 81026.33 00:09:45.539 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:45.539 Verification LBA range: start 0x80000 length 0x80000 00:09:45.539 Nvme2n2 : 5.09 1257.10 4.91 0.00 0.00 100704.24 14000.87 107240.73 00:09:45.539 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:45.539 Verification LBA range: start 0x0 length 0x80000 00:09:45.539 Nvme2n3 : 5.08 1310.12 5.12 0.00 0.00 96541.14 17992.61 84362.71 00:09:45.539 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:45.539 Verification LBA range: start 0x80000 length 0x80000 00:09:45.539 Nvme2n3 : 5.09 1256.73 4.91 0.00 0.00 100527.01 14179.61 112483.61 00:09:45.539 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:45.539 Verification LBA range: start 0x0 length 0x20000 00:09:45.539 Nvme3n1 : 5.08 1309.68 5.12 0.00 0.00 96374.29 13345.51 88652.33 00:09:45.539 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:45.539 Verification LBA range: start 0x20000 length 0x20000 00:09:45.539 Nvme3n1 : 5.09 1256.37 4.91 0.00 0.00 100371.77 12571.00 114390.11 00:09:45.539 =================================================================================================================== 00:09:45.539 Total : 17979.08 70.23 0.00 0.00 98836.05 12571.00 114390.11 00:09:45.797 00:09:45.798 real 0m6.385s 00:09:45.798 user 0m11.797s 00:09:45.798 sys 0m0.288s 00:09:45.798 13:08:42 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:45.798 13:08:42 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:45.798 ************************************ 00:09:45.798 END TEST bdev_verify 00:09:45.798 ************************************ 00:09:45.798 13:08:42 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:45.798 13:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:09:45.798 13:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:45.798 13:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:45.798 ************************************ 00:09:45.798 START TEST bdev_verify_big_io 00:09:45.798 ************************************ 00:09:45.798 13:08:42 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:46.056 [2024-07-15 13:08:42.567075] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
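Both verification passes in this suite are plain bdevperf runs against the same bdev JSON config; only the I/O size changes between bdev_verify (4 KiB) and bdev_verify_big_io (64 KiB). A condensed sketch of the two invocations, with paths and flags copied from the trace and the exit-status handling assumed (run_test does its own bookkeeping):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    # verify pass: queue depth 128, 4 KiB I/O, 5 s, core mask 0x3
    "$bdevperf" --json "$conf" -q 128 -o 4096 -w verify -t 5 -C -m 0x3

    # big-I/O verify pass: same shape, 64 KiB I/O
    "$bdevperf" --json "$conf" -q 128 -o 65536 -w verify -t 5 -C -m 0x3 || {
        echo "bdevperf verify failed" >&2    # handling assumed
        exit 1
    }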
00:09:46.056 [2024-07-15 13:08:42.567248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79770 ] 00:09:46.056 [2024-07-15 13:08:42.713003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:46.313 [2024-07-15 13:08:42.808181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.313 [2024-07-15 13:08:42.808258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.571 Running I/O for 5 seconds... 00:09:53.122 00:09:53.122 Latency(us) 00:09:53.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:53.122 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:53.122 Verification LBA range: start 0x0 length 0x5e80 00:09:53.122 Nvme0n1p1 : 5.92 102.40 6.40 0.00 0.00 1205854.14 33602.09 1349803.29 00:09:53.122 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:53.122 Verification LBA range: start 0x5e80 length 0x5e80 00:09:53.122 Nvme0n1p1 : 5.94 99.54 6.22 0.00 0.00 1233663.33 20852.36 1235413.18 00:09:53.122 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:53.122 Verification LBA range: start 0x0 length 0x5e7f 00:09:53.122 Nvme0n1p2 : 5.92 102.70 6.42 0.00 0.00 1163581.10 102951.10 1182031.13 00:09:53.122 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:53.122 Verification LBA range: start 0x5e7f length 0x5e7f 00:09:53.122 Nvme0n1p2 : 5.94 98.49 6.16 0.00 0.00 1207660.54 103427.72 1197283.14 00:09:53.122 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:53.122 Verification LBA range: start 0x0 length 0xa000 00:09:53.122 Nvme1n1 : 6.00 107.42 6.71 0.00 0.00 1081884.36 64821.06 1189657.13 00:09:53.122 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:53.122 Verification LBA range: start 0xa000 length 0xa000 00:09:53.122 Nvme1n1 : 5.94 102.84 6.43 0.00 0.00 1119424.31 107240.73 1052389.00 00:09:53.122 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:53.122 Verification LBA range: start 0x0 length 0x8000 00:09:53.122 Nvme2n1 : 6.01 107.40 6.71 0.00 0.00 1047387.39 64821.06 1204909.15 00:09:53.122 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:53.122 Verification LBA range: start 0x8000 length 0x8000 00:09:53.122 Nvme2n1 : 5.94 107.68 6.73 0.00 0.00 1052084.04 74830.20 1060015.01 00:09:53.122 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:53.122 Verification LBA range: start 0x0 length 0x8000 00:09:53.122 Nvme2n2 : 6.01 111.34 6.96 0.00 0.00 987603.21 65297.69 1220161.16 00:09:53.122 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:53.122 Verification LBA range: start 0x8000 length 0x8000 00:09:53.122 Nvme2n2 : 6.01 110.29 6.89 0.00 0.00 992282.08 64821.06 1067641.02 00:09:53.122 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:53.122 Verification LBA range: start 0x0 length 0x8000 00:09:53.122 Nvme2n3 : 6.11 120.93 7.56 0.00 0.00 886090.09 47185.92 1227787.17 00:09:53.122 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:53.122 Verification LBA range: start 0x8000 length 0x8000 00:09:53.123 Nvme2n3 : 6.10 112.39 7.02 0.00 0.00 947781.18 45994.36 
2257298.15 00:09:53.123 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:53.123 Verification LBA range: start 0x0 length 0x2000 00:09:53.123 Nvme3n1 : 6.12 130.60 8.16 0.00 0.00 798899.35 2978.91 1250665.19 00:09:53.123 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:53.123 Verification LBA range: start 0x2000 length 0x2000 00:09:53.123 Nvme3n1 : 6.13 128.25 8.02 0.00 0.00 807958.09 2055.45 2089525.99 00:09:53.123 =================================================================================================================== 00:09:53.123 Total : 1542.28 96.39 0.00 0.00 1024633.60 2055.45 2257298.15 00:09:53.695 00:09:53.695 real 0m7.642s 00:09:53.695 user 0m14.223s 00:09:53.695 sys 0m0.320s 00:09:53.695 13:08:50 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:53.695 ************************************ 00:09:53.695 END TEST bdev_verify_big_io 00:09:53.695 ************************************ 00:09:53.695 13:08:50 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:53.695 13:08:50 blockdev_nvme_gpt -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:53.695 13:08:50 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:09:53.695 13:08:50 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:53.695 13:08:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:53.695 ************************************ 00:09:53.695 START TEST bdev_write_zeroes 00:09:53.695 ************************************ 00:09:53.695 13:08:50 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:53.695 [2024-07-15 13:08:50.277612] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:09:53.695 [2024-07-15 13:08:50.277815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79874 ] 00:09:53.695 [2024-07-15 13:08:50.429381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.954 [2024-07-15 13:08:50.529369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.519 Running I/O for 1 seconds... 
00:09:55.454 00:09:55.454 Latency(us) 00:09:55.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.454 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:55.454 Nvme0n1p1 : 1.09 829.79 3.24 0.00 0.00 151461.40 9830.40 406084.89 00:09:55.454 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:55.454 Nvme0n1p2 : 1.09 734.46 2.87 0.00 0.00 171533.70 24188.74 396552.38 00:09:55.454 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:55.454 Nvme1n1 : 1.06 1328.74 5.19 0.00 0.00 95734.78 18111.77 352702.84 00:09:55.454 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:55.454 Nvme2n1 : 1.06 1205.90 4.71 0.00 0.00 105307.88 17992.61 354609.34 00:09:55.454 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:55.454 Nvme2n2 : 1.06 1203.91 4.70 0.00 0.00 105297.97 18945.86 356515.84 00:09:55.454 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:55.454 Nvme2n3 : 1.06 1201.90 4.69 0.00 0.00 105294.01 21924.77 356515.84 00:09:55.454 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:55.454 Nvme3n1 : 1.07 1199.91 4.69 0.00 0.00 105321.89 20852.36 356515.84 00:09:55.454 =================================================================================================================== 00:09:55.454 Total : 7704.63 30.10 0.00 0.00 115168.87 9830.40 406084.89 00:09:55.712 00:09:55.712 real 0m2.248s 00:09:55.712 user 0m1.846s 00:09:55.712 sys 0m0.289s 00:09:55.712 13:08:52 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:55.712 13:08:52 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:55.712 ************************************ 00:09:55.712 END TEST bdev_write_zeroes 00:09:55.712 ************************************ 00:09:55.971 13:08:52 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:55.971 13:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:09:55.971 13:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:55.971 13:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:55.971 ************************************ 00:09:55.971 START TEST bdev_json_nonenclosed 00:09:55.971 ************************************ 00:09:55.971 13:08:52 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:55.971 [2024-07-15 13:08:52.564258] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
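Every TEST block in this log follows the same run_test shape from autotest_common.sh: a START banner, the command run under xtrace, a real/user/sys time summary, and an END banner. A rough sketch of that wrapper as it can be inferred from the output above; the real helper also manages xtrace state and failure accounting that is not reproduced here:

    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # produces the real/user/sys lines seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }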
00:09:55.971 [2024-07-15 13:08:52.564445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79922 ] 00:09:56.229 [2024-07-15 13:08:52.712150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.229 [2024-07-15 13:08:52.806043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.229 [2024-07-15 13:08:52.806168] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:56.229 [2024-07-15 13:08:52.806204] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:56.229 [2024-07-15 13:08:52.806235] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:56.229 00:09:56.229 real 0m0.451s 00:09:56.229 user 0m0.224s 00:09:56.229 sys 0m0.122s 00:09:56.229 13:08:52 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:56.229 13:08:52 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:56.229 ************************************ 00:09:56.229 END TEST bdev_json_nonenclosed 00:09:56.229 ************************************ 00:09:56.488 13:08:52 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:56.488 13:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:09:56.488 13:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:56.488 13:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:56.488 ************************************ 00:09:56.488 START TEST bdev_json_nonarray 00:09:56.488 ************************************ 00:09:56.488 13:08:52 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:56.488 [2024-07-15 13:08:53.087398] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:09:56.488 [2024-07-15 13:08:53.087630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79942 ] 00:09:56.746 [2024-07-15 13:08:53.240165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.746 [2024-07-15 13:08:53.338958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.746 [2024-07-15 13:08:53.339100] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
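The two negative JSON tests hand bdevperf a deliberately malformed config and expect exactly the loader errors shown here; the fixture files themselves are not reproduced in the log. As an assumed illustration only: a well-formed SPDK config wraps an array of subsystems in a top-level object, and either dropping the enclosing braces or turning "subsystems" into an object would trip the two checks.

    # assumed illustration -- nonenclosed.json / nonarray.json contents are not shown in the log
    cat > /tmp/wellformed.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
    EOF
    # a config whose top level is not wrapped in {}            -> "not enclosed in {}"
    # a config where "subsystems" is an object, not an array   -> "'subsystems' should be an array."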
00:09:56.746 [2024-07-15 13:08:53.339133] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:56.746 [2024-07-15 13:08:53.339170] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:56.746 00:09:56.746 real 0m0.491s 00:09:56.746 user 0m0.249s 00:09:56.746 sys 0m0.136s 00:09:56.746 13:08:53 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:56.746 13:08:53 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:56.746 ************************************ 00:09:56.746 END TEST bdev_json_nonarray 00:09:56.746 ************************************ 00:09:57.003 13:08:53 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:09:57.003 13:08:53 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:09:57.003 13:08:53 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:57.003 13:08:53 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:57.003 13:08:53 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:57.003 13:08:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:57.003 ************************************ 00:09:57.003 START TEST bdev_gpt_uuid 00:09:57.003 ************************************ 00:09:57.003 13:08:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1121 -- # bdev_gpt_uuid 00:09:57.003 13:08:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@614 -- # local bdev 00:09:57.003 13:08:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:09:57.003 13:08:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=79973 00:09:57.003 13:08:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:57.003 13:08:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 79973 00:09:57.003 13:08:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@827 -- # '[' -z 79973 ']' 00:09:57.003 13:08:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.003 13:08:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:57.003 13:08:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.003 13:08:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:57.003 13:08:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:57.003 13:08:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:57.003 [2024-07-15 13:08:53.647605] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
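The gpt_uuid test starting here drives a standalone spdk_tgt over the default /var/tmp/spdk.sock: it loads the same bdev config, waits for bdev examination, then looks up each GPT partition bdev by its expected partition UUID and compares the alias and unique_partition_guid fields with jq, as the output further down shows. A condensed sketch of that exchange; the check_part helper name is mine, while the RPC names, jq filters and UUIDs are taken from the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"     # default socket /var/tmp/spdk.sock
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    $rpc load_config -j "$conf"
    $rpc bdev_wait_for_examine

    check_part() {
        local uuid=$1 bdev
        bdev=$($rpc bdev_get_bdevs -b "$uuid")
        [[ $(jq -r 'length' <<<"$bdev") == 1 ]] || return 1
        [[ $(jq -r '.[0].aliases[0]' <<<"$bdev") == "$uuid" ]] || return 1
        [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev") == "$uuid" ]] || return 1
    }

    check_part 6f89f330-603b-4116-ac73-2ca8eae53030    # SPDK_TEST_first
    check_part abf1734f-66e5-4c0f-aa29-4021d4d307df    # SPDK_TEST_second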
00:09:57.003 [2024-07-15 13:08:53.647891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79973 ] 00:09:57.261 [2024-07-15 13:08:53.794596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.261 [2024-07-15 13:08:53.892598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.196 13:08:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:58.196 13:08:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # return 0 00:09:58.196 13:08:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:58.196 13:08:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.196 13:08:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:58.196 Some configs were skipped because the RPC state that can call them passed over. 00:09:58.196 13:08:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.196 13:08:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:09:58.196 13:08:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.196 13:08:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:58.196 13:08:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.196 13:08:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:58.196 13:08:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.196 13:08:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:58.196 13:08:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.196 13:08:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # bdev='[ 00:09:58.196 { 00:09:58.196 "name": "Nvme0n1p1", 00:09:58.196 "aliases": [ 00:09:58.196 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:58.196 ], 00:09:58.196 "product_name": "GPT Disk", 00:09:58.196 "block_size": 4096, 00:09:58.196 "num_blocks": 774144, 00:09:58.196 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:58.196 "md_size": 64, 00:09:58.196 "md_interleave": false, 00:09:58.196 "dif_type": 0, 00:09:58.196 "assigned_rate_limits": { 00:09:58.196 "rw_ios_per_sec": 0, 00:09:58.196 "rw_mbytes_per_sec": 0, 00:09:58.196 "r_mbytes_per_sec": 0, 00:09:58.196 "w_mbytes_per_sec": 0 00:09:58.196 }, 00:09:58.196 "claimed": false, 00:09:58.196 "zoned": false, 00:09:58.196 "supported_io_types": { 00:09:58.196 "read": true, 00:09:58.196 "write": true, 00:09:58.196 "unmap": true, 00:09:58.196 "write_zeroes": true, 00:09:58.196 "flush": true, 00:09:58.196 "reset": true, 00:09:58.196 "compare": true, 00:09:58.196 "compare_and_write": false, 00:09:58.196 "abort": true, 00:09:58.196 "nvme_admin": false, 00:09:58.196 "nvme_io": false 00:09:58.196 }, 00:09:58.196 "driver_specific": { 00:09:58.196 "gpt": { 00:09:58.196 "base_bdev": "Nvme0n1", 00:09:58.196 "offset_blocks": 256, 00:09:58.196 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:58.196 "unique_partition_guid": 
"6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:58.196 "partition_name": "SPDK_TEST_first" 00:09:58.196 } 00:09:58.196 } 00:09:58.196 } 00:09:58.196 ]' 00:09:58.196 13:08:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r length 00:09:58.455 13:08:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:09:58.455 13:08:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:09:58.455 13:08:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:58.455 13:08:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:58.455 13:08:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:58.455 13:08:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:58.455 13:08:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.455 13:08:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:58.455 13:08:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.455 13:08:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # bdev='[ 00:09:58.455 { 00:09:58.455 "name": "Nvme0n1p2", 00:09:58.455 "aliases": [ 00:09:58.455 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:58.455 ], 00:09:58.455 "product_name": "GPT Disk", 00:09:58.455 "block_size": 4096, 00:09:58.455 "num_blocks": 774143, 00:09:58.455 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:58.455 "md_size": 64, 00:09:58.455 "md_interleave": false, 00:09:58.455 "dif_type": 0, 00:09:58.455 "assigned_rate_limits": { 00:09:58.455 "rw_ios_per_sec": 0, 00:09:58.455 "rw_mbytes_per_sec": 0, 00:09:58.455 "r_mbytes_per_sec": 0, 00:09:58.455 "w_mbytes_per_sec": 0 00:09:58.455 }, 00:09:58.455 "claimed": false, 00:09:58.455 "zoned": false, 00:09:58.455 "supported_io_types": { 00:09:58.455 "read": true, 00:09:58.455 "write": true, 00:09:58.455 "unmap": true, 00:09:58.455 "write_zeroes": true, 00:09:58.455 "flush": true, 00:09:58.455 "reset": true, 00:09:58.455 "compare": true, 00:09:58.455 "compare_and_write": false, 00:09:58.455 "abort": true, 00:09:58.455 "nvme_admin": false, 00:09:58.455 "nvme_io": false 00:09:58.455 }, 00:09:58.455 "driver_specific": { 00:09:58.455 "gpt": { 00:09:58.455 "base_bdev": "Nvme0n1", 00:09:58.455 "offset_blocks": 774400, 00:09:58.455 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:58.455 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:58.455 "partition_name": "SPDK_TEST_second" 00:09:58.455 } 00:09:58.455 } 00:09:58.455 } 00:09:58.455 ]' 00:09:58.455 13:08:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r length 00:09:58.455 13:08:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:09:58.455 13:08:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:09:58.714 13:08:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:58.714 13:08:55 blockdev_nvme_gpt.bdev_gpt_uuid -- 
bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:58.714 13:08:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:58.714 13:08:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@631 -- # killprocess 79973 00:09:58.714 13:08:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@946 -- # '[' -z 79973 ']' 00:09:58.714 13:08:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # kill -0 79973 00:09:58.714 13:08:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@951 -- # uname 00:09:58.714 13:08:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:58.714 13:08:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79973 00:09:58.714 13:08:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:58.714 13:08:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:58.714 killing process with pid 79973 00:09:58.714 13:08:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79973' 00:09:58.714 13:08:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@965 -- # kill 79973 00:09:58.714 13:08:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # wait 79973 00:09:59.281 00:09:59.281 real 0m2.221s 00:09:59.281 user 0m2.477s 00:09:59.281 sys 0m0.511s 00:09:59.281 13:08:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:59.281 13:08:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:59.281 ************************************ 00:09:59.281 END TEST bdev_gpt_uuid 00:09:59.281 ************************************ 00:09:59.281 13:08:55 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:09:59.281 13:08:55 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:09:59.281 13:08:55 blockdev_nvme_gpt -- bdev/blockdev.sh@811 -- # cleanup 00:09:59.281 13:08:55 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:59.281 13:08:55 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:59.281 13:08:55 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:09:59.281 13:08:55 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:09:59.281 13:08:55 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:09:59.281 13:08:55 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:59.540 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:59.798 Waiting for block devices as requested 00:09:59.798 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:59.798 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:59.798 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:00.056 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:05.318 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:05.318 13:09:01 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme1n1 ]] 00:10:05.318 13:09:01 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- 
# wipefs --all /dev/nvme1n1 00:10:05.318 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:10:05.318 /dev/nvme1n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:10:05.318 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:10:05.318 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:10:05.318 13:09:01 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:10:05.318 00:10:05.318 real 0m54.028s 00:10:05.318 user 1m8.324s 00:10:05.318 sys 0m10.343s 00:10:05.318 13:09:01 blockdev_nvme_gpt -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:05.318 13:09:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:05.318 ************************************ 00:10:05.318 END TEST blockdev_nvme_gpt 00:10:05.318 ************************************ 00:10:05.318 13:09:01 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:05.319 13:09:01 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:05.319 13:09:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:05.319 13:09:01 -- common/autotest_common.sh@10 -- # set +x 00:10:05.319 ************************************ 00:10:05.319 START TEST nvme 00:10:05.319 ************************************ 00:10:05.319 13:09:02 nvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:05.609 * Looking for test storage... 00:10:05.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:05.609 13:09:02 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:06.177 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:06.741 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:06.741 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:06.741 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:06.741 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:06.741 13:09:03 nvme -- nvme/nvme.sh@79 -- # uname 00:10:06.741 13:09:03 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:10:06.741 13:09:03 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:10:06.741 13:09:03 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:10:06.741 13:09:03 nvme -- common/autotest_common.sh@1078 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:10:06.741 13:09:03 nvme -- common/autotest_common.sh@1064 -- # _randomize_va_space=2 00:10:06.741 13:09:03 nvme -- common/autotest_common.sh@1065 -- # echo 0 00:10:06.741 Waiting for stub to ready for secondary processes... 00:10:06.741 13:09:03 nvme -- common/autotest_common.sh@1067 -- # stubpid=80587 00:10:06.741 13:09:03 nvme -- common/autotest_common.sh@1066 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:10:06.741 13:09:03 nvme -- common/autotest_common.sh@1068 -- # echo Waiting for stub to ready for secondary processes... 00:10:06.741 13:09:03 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:06.741 13:09:03 nvme -- common/autotest_common.sh@1071 -- # [[ -e /proc/80587 ]] 00:10:06.741 13:09:03 nvme -- common/autotest_common.sh@1072 -- # sleep 1s 00:10:06.741 [2024-07-15 13:09:03.421839] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
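The cleanup a few entries above is the tail of blockdev.sh: hand the controllers back to the kernel nvme driver via setup.sh reset, then wipe the GPT and protective MBR that the test wrote onto the namespace, which is what the wipefs output reports. Restated as a short sketch, with the udev wait and device enumeration details omitted:

    /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset    # rebind controllers to the kernel nvme driver
    if [[ -b /dev/nvme1n1 ]]; then
        wipefs --all /dev/nvme1n1                          # drops the GPT headers and protective MBR written for the test
    fi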
00:10:06.741 [2024-07-15 13:09:03.422051] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:10:07.677 13:09:04 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:07.677 13:09:04 nvme -- common/autotest_common.sh@1071 -- # [[ -e /proc/80587 ]] 00:10:07.677 13:09:04 nvme -- common/autotest_common.sh@1072 -- # sleep 1s 00:10:08.244 [2024-07-15 13:09:04.709098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:08.244 [2024-07-15 13:09:04.784415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:08.244 [2024-07-15 13:09:04.784439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.244 [2024-07-15 13:09:04.784479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:08.244 [2024-07-15 13:09:04.800405] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:10:08.244 [2024-07-15 13:09:04.800462] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:08.244 [2024-07-15 13:09:04.811974] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:10:08.244 [2024-07-15 13:09:04.812429] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:10:08.244 [2024-07-15 13:09:04.813728] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:08.244 [2024-07-15 13:09:04.814234] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:10:08.244 [2024-07-15 13:09:04.814408] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:10:08.244 [2024-07-15 13:09:04.815863] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:08.244 [2024-07-15 13:09:04.816237] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:10:08.244 [2024-07-15 13:09:04.816325] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:10:08.244 [2024-07-15 13:09:04.817070] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:08.244 [2024-07-15 13:09:04.817323] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:10:08.244 [2024-07-15 13:09:04.817423] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:10:08.244 [2024-07-15 13:09:04.817495] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:10:08.244 [2024-07-15 13:09:04.817576] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:10:08.811 13:09:05 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:08.811 13:09:05 nvme -- common/autotest_common.sh@1074 -- # echo done. 00:10:08.811 done. 
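Before any of the nvme tests run, nvme.sh keeps a small SPDK "stub" application resident (here pid 80587) so the per-test processes can attach to its hugepage memory as secondary processes; the one-second polling above simply waits until /var/run/spdk_stub0 exists while the stub is still alive. A sketch of that gate, with the stub flags copied from the trace and the failure handling assumed:

    stub=/home/vagrant/spdk_repo/spdk/test/app/stub/stub
    "$stub" -s 4096 -i 0 -m 0xE &          # flags as in the trace above
    stubpid=$!
    echo "Waiting for stub to ready for secondary processes..."
    while [[ ! -e /var/run/spdk_stub0 ]]; do
        [[ -e /proc/$stubpid ]] || exit 1  # stub died before becoming ready (handling assumed)
        sleep 1s
    done
    echo done.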
00:10:08.811 13:09:05 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:08.811 13:09:05 nvme -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:10:08.811 13:09:05 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:08.811 13:09:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:08.811 ************************************ 00:10:08.811 START TEST nvme_reset 00:10:08.811 ************************************ 00:10:08.811 13:09:05 nvme.nvme_reset -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:09.069 Initializing NVMe Controllers 00:10:09.069 Skipping QEMU NVMe SSD at 0000:00:10.0 00:10:09.069 Skipping QEMU NVMe SSD at 0000:00:11.0 00:10:09.069 Skipping QEMU NVMe SSD at 0000:00:13.0 00:10:09.069 Skipping QEMU NVMe SSD at 0000:00:12.0 00:10:09.069 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:10:09.069 ************************************ 00:10:09.069 END TEST nvme_reset 00:10:09.069 ************************************ 00:10:09.069 00:10:09.069 real 0m0.268s 00:10:09.069 user 0m0.092s 00:10:09.069 sys 0m0.121s 00:10:09.069 13:09:05 nvme.nvme_reset -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:09.069 13:09:05 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:10:09.069 13:09:05 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:10:09.069 13:09:05 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:09.069 13:09:05 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:09.069 13:09:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:09.069 ************************************ 00:10:09.069 START TEST nvme_identify 00:10:09.069 ************************************ 00:10:09.069 13:09:05 nvme.nvme_identify -- common/autotest_common.sh@1121 -- # nvme_identify 00:10:09.069 13:09:05 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:10:09.069 13:09:05 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:10:09.069 13:09:05 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:10:09.069 13:09:05 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:10:09.069 13:09:05 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:09.069 13:09:05 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # local bdfs 00:10:09.069 13:09:05 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:09.069 13:09:05 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:09.069 13:09:05 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:10:09.069 13:09:05 nvme.nvme_identify -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:10:09.069 13:09:05 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:09.069 13:09:05 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:10:09.330 [2024-07-15 13:09:05.993757] nvme_ctrlr.c:3486:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 80620 terminated unexpected 00:10:09.330 ===================================================== 00:10:09.330 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:09.330 
===================================================== 00:10:09.330 Controller Capabilities/Features 00:10:09.330 ================================ 00:10:09.330 Vendor ID: 1b36 00:10:09.330 Subsystem Vendor ID: 1af4 00:10:09.330 Serial Number: 12340 00:10:09.330 Model Number: QEMU NVMe Ctrl 00:10:09.330 Firmware Version: 8.0.0 00:10:09.330 Recommended Arb Burst: 6 00:10:09.330 IEEE OUI Identifier: 00 54 52 00:10:09.330 Multi-path I/O 00:10:09.330 May have multiple subsystem ports: No 00:10:09.330 May have multiple controllers: No 00:10:09.330 Associated with SR-IOV VF: No 00:10:09.330 Max Data Transfer Size: 524288 00:10:09.330 Max Number of Namespaces: 256 00:10:09.330 Max Number of I/O Queues: 64 00:10:09.330 NVMe Specification Version (VS): 1.4 00:10:09.330 NVMe Specification Version (Identify): 1.4 00:10:09.330 Maximum Queue Entries: 2048 00:10:09.330 Contiguous Queues Required: Yes 00:10:09.330 Arbitration Mechanisms Supported 00:10:09.330 Weighted Round Robin: Not Supported 00:10:09.330 Vendor Specific: Not Supported 00:10:09.330 Reset Timeout: 7500 ms 00:10:09.330 Doorbell Stride: 4 bytes 00:10:09.330 NVM Subsystem Reset: Not Supported 00:10:09.330 Command Sets Supported 00:10:09.330 NVM Command Set: Supported 00:10:09.330 Boot Partition: Not Supported 00:10:09.330 Memory Page Size Minimum: 4096 bytes 00:10:09.330 Memory Page Size Maximum: 65536 bytes 00:10:09.330 Persistent Memory Region: Not Supported 00:10:09.330 Optional Asynchronous Events Supported 00:10:09.330 Namespace Attribute Notices: Supported 00:10:09.330 Firmware Activation Notices: Not Supported 00:10:09.330 ANA Change Notices: Not Supported 00:10:09.330 PLE Aggregate Log Change Notices: Not Supported 00:10:09.330 LBA Status Info Alert Notices: Not Supported 00:10:09.330 EGE Aggregate Log Change Notices: Not Supported 00:10:09.330 Normal NVM Subsystem Shutdown event: Not Supported 00:10:09.330 Zone Descriptor Change Notices: Not Supported 00:10:09.330 Discovery Log Change Notices: Not Supported 00:10:09.330 Controller Attributes 00:10:09.330 128-bit Host Identifier: Not Supported 00:10:09.330 Non-Operational Permissive Mode: Not Supported 00:10:09.330 NVM Sets: Not Supported 00:10:09.330 Read Recovery Levels: Not Supported 00:10:09.330 Endurance Groups: Not Supported 00:10:09.330 Predictable Latency Mode: Not Supported 00:10:09.330 Traffic Based Keep ALive: Not Supported 00:10:09.330 Namespace Granularity: Not Supported 00:10:09.330 SQ Associations: Not Supported 00:10:09.330 UUID List: Not Supported 00:10:09.330 Multi-Domain Subsystem: Not Supported 00:10:09.330 Fixed Capacity Management: Not Supported 00:10:09.330 Variable Capacity Management: Not Supported 00:10:09.330 Delete Endurance Group: Not Supported 00:10:09.330 Delete NVM Set: Not Supported 00:10:09.330 Extended LBA Formats Supported: Supported 00:10:09.330 Flexible Data Placement Supported: Not Supported 00:10:09.330 00:10:09.330 Controller Memory Buffer Support 00:10:09.330 ================================ 00:10:09.330 Supported: No 00:10:09.330 00:10:09.330 Persistent Memory Region Support 00:10:09.330 ================================ 00:10:09.330 Supported: No 00:10:09.330 00:10:09.330 Admin Command Set Attributes 00:10:09.330 ============================ 00:10:09.330 Security Send/Receive: Not Supported 00:10:09.330 Format NVM: Supported 00:10:09.330 Firmware Activate/Download: Not Supported 00:10:09.330 Namespace Management: Supported 00:10:09.330 Device Self-Test: Not Supported 00:10:09.330 Directives: Supported 00:10:09.330 NVMe-MI: Not Supported 
00:10:09.330 Virtualization Management: Not Supported 00:10:09.330 Doorbell Buffer Config: Supported 00:10:09.330 Get LBA Status Capability: Not Supported 00:10:09.330 Command & Feature Lockdown Capability: Not Supported 00:10:09.330 Abort Command Limit: 4 00:10:09.330 Async Event Request Limit: 4 00:10:09.330 Number of Firmware Slots: N/A 00:10:09.330 Firmware Slot 1 Read-Only: N/A 00:10:09.330 Firmware Activation Without Reset: N/A 00:10:09.330 Multiple Update Detection Support: N/A 00:10:09.330 Firmware Update Granularity: No Information Provided 00:10:09.330 Per-Namespace SMART Log: Yes 00:10:09.330 Asymmetric Namespace Access Log Page: Not Supported 00:10:09.330 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:09.330 Command Effects Log Page: Supported 00:10:09.330 Get Log Page Extended Data: Supported 00:10:09.330 Telemetry Log Pages: Not Supported 00:10:09.330 Persistent Event Log Pages: Not Supported 00:10:09.330 Supported Log Pages Log Page: May Support 00:10:09.330 Commands Supported & Effects Log Page: Not Supported 00:10:09.330 Feature Identifiers & Effects Log Page:May Support 00:10:09.330 NVMe-MI Commands & Effects Log Page: May Support 00:10:09.330 Data Area 4 for Telemetry Log: Not Supported 00:10:09.330 Error Log Page Entries Supported: 1 00:10:09.330 Keep Alive: Not Supported 00:10:09.330 00:10:09.330 NVM Command Set Attributes 00:10:09.330 ========================== 00:10:09.330 Submission Queue Entry Size 00:10:09.330 Max: 64 00:10:09.330 Min: 64 00:10:09.330 Completion Queue Entry Size 00:10:09.330 Max: 16 00:10:09.330 Min: 16 00:10:09.330 Number of Namespaces: 256 00:10:09.330 Compare Command: Supported 00:10:09.330 Write Uncorrectable Command: Not Supported 00:10:09.330 Dataset Management Command: Supported 00:10:09.330 Write Zeroes Command: Supported 00:10:09.330 Set Features Save Field: Supported 00:10:09.330 Reservations: Not Supported 00:10:09.330 Timestamp: Supported 00:10:09.330 Copy: Supported 00:10:09.330 Volatile Write Cache: Present 00:10:09.330 Atomic Write Unit (Normal): 1 00:10:09.330 Atomic Write Unit (PFail): 1 00:10:09.330 Atomic Compare & Write Unit: 1 00:10:09.330 Fused Compare & Write: Not Supported 00:10:09.330 Scatter-Gather List 00:10:09.330 SGL Command Set: Supported 00:10:09.330 SGL Keyed: Not Supported 00:10:09.330 SGL Bit Bucket Descriptor: Not Supported 00:10:09.330 SGL Metadata Pointer: Not Supported 00:10:09.330 Oversized SGL: Not Supported 00:10:09.330 SGL Metadata Address: Not Supported 00:10:09.330 SGL Offset: Not Supported 00:10:09.330 Transport SGL Data Block: Not Supported 00:10:09.330 Replay Protected Memory Block: Not Supported 00:10:09.330 00:10:09.330 Firmware Slot Information 00:10:09.330 ========================= 00:10:09.330 Active slot: 1 00:10:09.330 Slot 1 Firmware Revision: 1.0 00:10:09.330 00:10:09.330 00:10:09.330 Commands Supported and Effects 00:10:09.330 ============================== 00:10:09.330 Admin Commands 00:10:09.330 -------------- 00:10:09.330 Delete I/O Submission Queue (00h): Supported 00:10:09.330 Create I/O Submission Queue (01h): Supported 00:10:09.330 Get Log Page (02h): Supported 00:10:09.330 Delete I/O Completion Queue (04h): Supported 00:10:09.330 Create I/O Completion Queue (05h): Supported 00:10:09.330 Identify (06h): Supported 00:10:09.330 Abort (08h): Supported 00:10:09.330 Set Features (09h): Supported 00:10:09.330 Get Features (0Ah): Supported 00:10:09.330 Asynchronous Event Request (0Ch): Supported 00:10:09.330 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:09.330 Directive 
Send (19h): Supported 00:10:09.331 Directive Receive (1Ah): Supported 00:10:09.331 Virtualization Management (1Ch): Supported 00:10:09.331 Doorbell Buffer Config (7Ch): Supported 00:10:09.331 Format NVM (80h): Supported LBA-Change 00:10:09.331 I/O Commands 00:10:09.331 ------------ 00:10:09.331 Flush (00h): Supported LBA-Change 00:10:09.331 Write (01h): Supported LBA-Change 00:10:09.331 Read (02h): Supported 00:10:09.331 Compare (05h): Supported 00:10:09.331 Write Zeroes (08h): Supported LBA-Change 00:10:09.331 Dataset Management (09h): Supported LBA-Change 00:10:09.331 Unknown (0Ch): Supported 00:10:09.331 Unknown (12h): Supported 00:10:09.331 Copy (19h): Supported LBA-Change 00:10:09.331 Unknown (1Dh): Supported LBA-Change 00:10:09.331 00:10:09.331 Error Log 00:10:09.331 ========= 00:10:09.331 00:10:09.331 Arbitration 00:10:09.331 =========== 00:10:09.331 Arbitration Burst: no limit 00:10:09.331 00:10:09.331 Power Management 00:10:09.331 ================ 00:10:09.331 Number of Power States: 1 00:10:09.331 Current Power State: Power State #0 00:10:09.331 Power State #0: 00:10:09.331 Max Power: 25.00 W 00:10:09.331 Non-Operational State: Operational 00:10:09.331 Entry Latency: 16 microseconds 00:10:09.331 Exit Latency: 4 microseconds 00:10:09.331 Relative Read Throughput: 0 00:10:09.331 Relative Read Latency: 0 00:10:09.331 Relative Write Throughput: 0 00:10:09.331 Relative Write Latency: 0 00:10:09.331 Idle Power[2024-07-15 13:09:05.995226] nvme_ctrlr.c:3486:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 80620 terminated unexpected 00:10:09.331 : Not Reported 00:10:09.331 Active Power: Not Reported 00:10:09.331 Non-Operational Permissive Mode: Not Supported 00:10:09.331 00:10:09.331 Health Information 00:10:09.331 ================== 00:10:09.331 Critical Warnings: 00:10:09.331 Available Spare Space: OK 00:10:09.331 Temperature: OK 00:10:09.331 Device Reliability: OK 00:10:09.331 Read Only: No 00:10:09.331 Volatile Memory Backup: OK 00:10:09.331 Current Temperature: 323 Kelvin (50 Celsius) 00:10:09.331 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:09.331 Available Spare: 0% 00:10:09.331 Available Spare Threshold: 0% 00:10:09.331 Life Percentage Used: 0% 00:10:09.331 Data Units Read: 1025 00:10:09.331 Data Units Written: 857 00:10:09.331 Host Read Commands: 48828 00:10:09.331 Host Write Commands: 47322 00:10:09.331 Controller Busy Time: 0 minutes 00:10:09.331 Power Cycles: 0 00:10:09.331 Power On Hours: 0 hours 00:10:09.331 Unsafe Shutdowns: 0 00:10:09.331 Unrecoverable Media Errors: 0 00:10:09.331 Lifetime Error Log Entries: 0 00:10:09.331 Warning Temperature Time: 0 minutes 00:10:09.331 Critical Temperature Time: 0 minutes 00:10:09.331 00:10:09.331 Number of Queues 00:10:09.331 ================ 00:10:09.331 Number of I/O Submission Queues: 64 00:10:09.331 Number of I/O Completion Queues: 64 00:10:09.331 00:10:09.331 ZNS Specific Controller Data 00:10:09.331 ============================ 00:10:09.331 Zone Append Size Limit: 0 00:10:09.331 00:10:09.331 00:10:09.331 Active Namespaces 00:10:09.331 ================= 00:10:09.331 Namespace ID:1 00:10:09.331 Error Recovery Timeout: Unlimited 00:10:09.331 Command Set Identifier: NVM (00h) 00:10:09.331 Deallocate: Supported 00:10:09.331 Deallocated/Unwritten Error: Supported 00:10:09.331 Deallocated Read Value: All 0x00 00:10:09.331 Deallocate in Write Zeroes: Not Supported 00:10:09.331 Deallocated Guard Field: 0xFFFF 00:10:09.331 Flush: Supported 00:10:09.331 Reservation: Not Supported 00:10:09.331 Metadata Transferred as: 
Separate Metadata Buffer 00:10:09.331 Namespace Sharing Capabilities: Private 00:10:09.331 Size (in LBAs): 1548666 (5GiB) 00:10:09.331 Capacity (in LBAs): 1548666 (5GiB) 00:10:09.331 Utilization (in LBAs): 1548666 (5GiB) 00:10:09.331 Thin Provisioning: Not Supported 00:10:09.331 Per-NS Atomic Units: No 00:10:09.331 Maximum Single Source Range Length: 128 00:10:09.331 Maximum Copy Length: 128 00:10:09.331 Maximum Source Range Count: 128 00:10:09.331 NGUID/EUI64 Never Reused: No 00:10:09.331 Namespace Write Protected: No 00:10:09.331 Number of LBA Formats: 8 00:10:09.331 Current LBA Format: LBA Format #07 00:10:09.331 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:09.331 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:09.331 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:09.331 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:09.331 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:09.331 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:09.331 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:09.331 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:09.331 00:10:09.331 ===================================================== 00:10:09.331 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:09.331 ===================================================== 00:10:09.331 Controller Capabilities/Features 00:10:09.331 ================================ 00:10:09.331 Vendor ID: 1b36 00:10:09.331 Subsystem Vendor ID: 1af4 00:10:09.331 Serial Number: 12341 00:10:09.331 Model Number: QEMU NVMe Ctrl 00:10:09.331 Firmware Version: 8.0.0 00:10:09.331 Recommended Arb Burst: 6 00:10:09.331 IEEE OUI Identifier: 00 54 52 00:10:09.331 Multi-path I/O 00:10:09.331 May have multiple subsystem ports: No 00:10:09.331 May have multiple controllers: No 00:10:09.331 Associated with SR-IOV VF: No 00:10:09.331 Max Data Transfer Size: 524288 00:10:09.331 Max Number of Namespaces: 256 00:10:09.331 Max Number of I/O Queues: 64 00:10:09.331 NVMe Specification Version (VS): 1.4 00:10:09.331 NVMe Specification Version (Identify): 1.4 00:10:09.331 Maximum Queue Entries: 2048 00:10:09.331 Contiguous Queues Required: Yes 00:10:09.331 Arbitration Mechanisms Supported 00:10:09.331 Weighted Round Robin: Not Supported 00:10:09.331 Vendor Specific: Not Supported 00:10:09.331 Reset Timeout: 7500 ms 00:10:09.331 Doorbell Stride: 4 bytes 00:10:09.331 NVM Subsystem Reset: Not Supported 00:10:09.331 Command Sets Supported 00:10:09.331 NVM Command Set: Supported 00:10:09.331 Boot Partition: Not Supported 00:10:09.331 Memory Page Size Minimum: 4096 bytes 00:10:09.331 Memory Page Size Maximum: 65536 bytes 00:10:09.331 Persistent Memory Region: Not Supported 00:10:09.331 Optional Asynchronous Events Supported 00:10:09.331 Namespace Attribute Notices: Supported 00:10:09.331 Firmware Activation Notices: Not Supported 00:10:09.331 ANA Change Notices: Not Supported 00:10:09.331 PLE Aggregate Log Change Notices: Not Supported 00:10:09.331 LBA Status Info Alert Notices: Not Supported 00:10:09.331 EGE Aggregate Log Change Notices: Not Supported 00:10:09.331 Normal NVM Subsystem Shutdown event: Not Supported 00:10:09.331 Zone Descriptor Change Notices: Not Supported 00:10:09.331 Discovery Log Change Notices: Not Supported 00:10:09.331 Controller Attributes 00:10:09.331 128-bit Host Identifier: Not Supported 00:10:09.331 Non-Operational Permissive Mode: Not Supported 00:10:09.331 NVM Sets: Not Supported 00:10:09.331 Read Recovery Levels: Not Supported 00:10:09.331 Endurance Groups: Not Supported 00:10:09.331 
Predictable Latency Mode: Not Supported 00:10:09.331 Traffic Based Keep ALive: Not Supported 00:10:09.331 Namespace Granularity: Not Supported 00:10:09.331 SQ Associations: Not Supported 00:10:09.331 UUID List: Not Supported 00:10:09.331 Multi-Domain Subsystem: Not Supported 00:10:09.331 Fixed Capacity Management: Not Supported 00:10:09.331 Variable Capacity Management: Not Supported 00:10:09.331 Delete Endurance Group: Not Supported 00:10:09.331 Delete NVM Set: Not Supported 00:10:09.331 Extended LBA Formats Supported: Supported 00:10:09.331 Flexible Data Placement Supported: Not Supported 00:10:09.331 00:10:09.331 Controller Memory Buffer Support 00:10:09.331 ================================ 00:10:09.331 Supported: No 00:10:09.331 00:10:09.331 Persistent Memory Region Support 00:10:09.331 ================================ 00:10:09.331 Supported: No 00:10:09.331 00:10:09.331 Admin Command Set Attributes 00:10:09.331 ============================ 00:10:09.331 Security Send/Receive: Not Supported 00:10:09.331 Format NVM: Supported 00:10:09.331 Firmware Activate/Download: Not Supported 00:10:09.331 Namespace Management: Supported 00:10:09.331 Device Self-Test: Not Supported 00:10:09.331 Directives: Supported 00:10:09.331 NVMe-MI: Not Supported 00:10:09.331 Virtualization Management: Not Supported 00:10:09.331 Doorbell Buffer Config: Supported 00:10:09.331 Get LBA Status Capability: Not Supported 00:10:09.331 Command & Feature Lockdown Capability: Not Supported 00:10:09.331 Abort Command Limit: 4 00:10:09.331 Async Event Request Limit: 4 00:10:09.331 Number of Firmware Slots: N/A 00:10:09.331 Firmware Slot 1 Read-Only: N/A 00:10:09.331 Firmware Activation Without Reset: N/A 00:10:09.331 Multiple Update Detection Support: N/A 00:10:09.332 Firmware Update Granularity: No Information Provided 00:10:09.332 Per-Namespace SMART Log: Yes 00:10:09.332 Asymmetric Namespace Access Log Page: Not Supported 00:10:09.332 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:09.332 Command Effects Log Page: Supported 00:10:09.332 Get Log Page Extended Data: Supported 00:10:09.332 Telemetry Log Pages: Not Supported 00:10:09.332 Persistent Event Log Pages: Not Supported 00:10:09.332 Supported Log Pages Log Page: May Support 00:10:09.332 Commands Supported & Effects Log Page: Not Supported 00:10:09.332 Feature Identifiers & Effects Log Page:May Support 00:10:09.332 NVMe-MI Commands & Effects Log Page: May Support 00:10:09.332 Data Area 4 for Telemetry Log: Not Supported 00:10:09.332 Error Log Page Entries Supported: 1 00:10:09.332 Keep Alive: Not Supported 00:10:09.332 00:10:09.332 NVM Command Set Attributes 00:10:09.332 ========================== 00:10:09.332 Submission Queue Entry Size 00:10:09.332 Max: 64 00:10:09.332 Min: 64 00:10:09.332 Completion Queue Entry Size 00:10:09.332 Max: 16 00:10:09.332 Min: 16 00:10:09.332 Number of Namespaces: 256 00:10:09.332 Compare Command: Supported 00:10:09.332 Write Uncorrectable Command: Not Supported 00:10:09.332 Dataset Management Command: Supported 00:10:09.332 Write Zeroes Command: Supported 00:10:09.332 Set Features Save Field: Supported 00:10:09.332 Reservations: Not Supported 00:10:09.332 Timestamp: Supported 00:10:09.332 Copy: Supported 00:10:09.332 Volatile Write Cache: Present 00:10:09.332 Atomic Write Unit (Normal): 1 00:10:09.332 Atomic Write Unit (PFail): 1 00:10:09.332 Atomic Compare & Write Unit: 1 00:10:09.332 Fused Compare & Write: Not Supported 00:10:09.332 Scatter-Gather List 00:10:09.332 SGL Command Set: Supported 00:10:09.332 SGL Keyed: Not Supported 
00:10:09.332 SGL Bit Bucket Descriptor: Not Supported 00:10:09.332 SGL Metadata Pointer: Not Supported 00:10:09.332 Oversized SGL: Not Supported 00:10:09.332 SGL Metadata Address: Not Supported 00:10:09.332 SGL Offset: Not Supported 00:10:09.332 Transport SGL Data Block: Not Supported 00:10:09.332 Replay Protected Memory Block: Not Supported 00:10:09.332 00:10:09.332 Firmware Slot Information 00:10:09.332 ========================= 00:10:09.332 Active slot: 1 00:10:09.332 Slot 1 Firmware Revision: 1.0 00:10:09.332 00:10:09.332 00:10:09.332 Commands Supported and Effects 00:10:09.332 ============================== 00:10:09.332 Admin Commands 00:10:09.332 -------------- 00:10:09.332 Delete I/O Submission Queue (00h): Supported 00:10:09.332 Create I/O Submission Queue (01h): Supported 00:10:09.332 Get Log Page (02h): Supported 00:10:09.332 Delete I/O Completion Queue (04h): Supported 00:10:09.332 Create I/O Completion Queue (05h): Supported 00:10:09.332 Identify (06h): Supported 00:10:09.332 Abort (08h): Supported 00:10:09.332 Set Features (09h): Supported 00:10:09.332 Get Features (0Ah): Supported 00:10:09.332 Asynchronous Event Request (0Ch): Supported 00:10:09.332 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:09.332 Directive Send (19h): Supported 00:10:09.332 Directive Receive (1Ah): Supported 00:10:09.332 Virtualization Management (1Ch): Supported 00:10:09.332 Doorbell Buffer Config (7Ch): Supported 00:10:09.332 Format NVM (80h): Supported LBA-Change 00:10:09.332 I/O Commands 00:10:09.332 ------------ 00:10:09.332 Flush (00h): Supported LBA-Change 00:10:09.332 Write (01h): Supported LBA-Change 00:10:09.332 Read (02h): Supported 00:10:09.332 Compare (05h): Supported 00:10:09.332 Write Zeroes (08h): Supported LBA-Change 00:10:09.332 Dataset Management (09h): Supported LBA-Change 00:10:09.332 Unknown (0Ch): Supported 00:10:09.332 Unknown (12h): Supported 00:10:09.332 Copy (19h): Supported LBA-Change 00:10:09.332 Unknown (1Dh): Supported LBA-Change 00:10:09.332 00:10:09.332 Error Log 00:10:09.332 ========= 00:10:09.332 00:10:09.332 Arbitration 00:10:09.332 =========== 00:10:09.332 Arbitration Burst: no limit 00:10:09.332 00:10:09.332 Power Management 00:10:09.332 ================ 00:10:09.332 Number of Power States: 1 00:10:09.332 Current Power State: Power State #0 00:10:09.332 Power State #0: 00:10:09.332 Max Power: 25.00 W 00:10:09.332 Non-Operational State: Operational 00:10:09.332 Entry Latency: 16 microseconds 00:10:09.332 Exit Latency: 4 microseconds 00:10:09.332 Relative Read Throughput: 0 00:10:09.332 Relative Read Latency: 0 00:10:09.332 Relative Write Throughput: 0 00:10:09.332 Relative Write Latency: 0 00:10:09.332 Idle Power: Not Reported 00:10:09.332 Active Power: Not Reported 00:10:09.332 Non-Operational Permissive Mode: Not Supported 00:10:09.332 00:10:09.332 Health Information 00:10:09.332 ================== 00:10:09.332 Critical Warnings: 00:10:09.332 Available Spare Space: OK 00:10:09.332 Temperature: OK 00:10:09.332 Device Reliability: OK 00:10:09.332 Read Only: No 00:10:09.332 Volatile Memory Backup: OK 00:10:09.332 Current Temperature: 323 Kelvin (50 Celsius) 00:10:09.332 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:09.332 Available Spare: 0% 00:10:09.332 Available Spare Threshold: 0% 00:10:09.332 Life Percentage Used: 0% 00:10:09.332 Data Units Read: 762 00:10:09.332 Data Units Written: 613 00:10:09.332 Host Read Commands: 34902 00:10:09.332 Host Write Commands: 32675 00:10:09.332 Controller Busy Time: 0 minutes 00:10:09.332 Power Cycles: 0 
00:10:09.332 Power On Hours: 0 hours 00:10:09.332 Unsafe Shutdowns: 0 00:10:09.332 Unrecoverable Media Errors: 0 00:10:09.332 Lifetime Error Log Entries: 0 00:10:09.332 Warning Temperature Time: 0 minutes 00:10:09.332 Critical Temperature Time: 0 minutes 00:10:09.332 00:10:09.332 Number of Queues 00:10:09.332 ================ 00:10:09.332 Number of I/O Submission Queues: 64 00:10:09.332 Number of I/O Completion Queues: 64 00:10:09.332 00:10:09.332 ZNS Specific Controller Data 00:10:09.332 ============================ 00:10:09.332 Zone Append Size Limit: 0 00:10:09.332 00:10:09.332 00:10:09.332 Active Namespaces 00:10:09.332 ================= 00:10:09.332 Namespace ID:1 00:10:09.332 Error Recovery Timeout: Unlimited 00:10:09.332 Command Set Identifier: [2024-07-15 13:09:05.996888] nvme_ctrlr.c:3486:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 80620 terminated unexpected 00:10:09.332 NVM (00h) 00:10:09.332 Deallocate: Supported 00:10:09.332 Deallocated/Unwritten Error: Supported 00:10:09.332 Deallocated Read Value: All 0x00 00:10:09.332 Deallocate in Write Zeroes: Not Supported 00:10:09.332 Deallocated Guard Field: 0xFFFF 00:10:09.332 Flush: Supported 00:10:09.332 Reservation: Not Supported 00:10:09.332 Namespace Sharing Capabilities: Private 00:10:09.332 Size (in LBAs): 1310720 (5GiB) 00:10:09.332 Capacity (in LBAs): 1310720 (5GiB) 00:10:09.332 Utilization (in LBAs): 1310720 (5GiB) 00:10:09.332 Thin Provisioning: Not Supported 00:10:09.332 Per-NS Atomic Units: No 00:10:09.332 Maximum Single Source Range Length: 128 00:10:09.332 Maximum Copy Length: 128 00:10:09.332 Maximum Source Range Count: 128 00:10:09.332 NGUID/EUI64 Never Reused: No 00:10:09.332 Namespace Write Protected: No 00:10:09.332 Number of LBA Formats: 8 00:10:09.332 Current LBA Format: LBA Format #04 00:10:09.332 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:09.332 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:09.332 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:09.332 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:09.332 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:09.332 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:09.332 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:09.332 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:09.332 00:10:09.332 ===================================================== 00:10:09.332 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:09.332 ===================================================== 00:10:09.332 Controller Capabilities/Features 00:10:09.332 ================================ 00:10:09.332 Vendor ID: 1b36 00:10:09.332 Subsystem Vendor ID: 1af4 00:10:09.332 Serial Number: 12343 00:10:09.332 Model Number: QEMU NVMe Ctrl 00:10:09.332 Firmware Version: 8.0.0 00:10:09.332 Recommended Arb Burst: 6 00:10:09.332 IEEE OUI Identifier: 00 54 52 00:10:09.332 Multi-path I/O 00:10:09.332 May have multiple subsystem ports: No 00:10:09.332 May have multiple controllers: Yes 00:10:09.332 Associated with SR-IOV VF: No 00:10:09.332 Max Data Transfer Size: 524288 00:10:09.332 Max Number of Namespaces: 256 00:10:09.332 Max Number of I/O Queues: 64 00:10:09.332 NVMe Specification Version (VS): 1.4 00:10:09.332 NVMe Specification Version (Identify): 1.4 00:10:09.332 Maximum Queue Entries: 2048 00:10:09.332 Contiguous Queues Required: Yes 00:10:09.332 Arbitration Mechanisms Supported 00:10:09.332 Weighted Round Robin: Not Supported 00:10:09.333 Vendor Specific: Not Supported 00:10:09.333 Reset Timeout: 7500 ms 00:10:09.333 
Doorbell Stride: 4 bytes 00:10:09.333 NVM Subsystem Reset: Not Supported 00:10:09.333 Command Sets Supported 00:10:09.333 NVM Command Set: Supported 00:10:09.333 Boot Partition: Not Supported 00:10:09.333 Memory Page Size Minimum: 4096 bytes 00:10:09.333 Memory Page Size Maximum: 65536 bytes 00:10:09.333 Persistent Memory Region: Not Supported 00:10:09.333 Optional Asynchronous Events Supported 00:10:09.333 Namespace Attribute Notices: Supported 00:10:09.333 Firmware Activation Notices: Not Supported 00:10:09.333 ANA Change Notices: Not Supported 00:10:09.333 PLE Aggregate Log Change Notices: Not Supported 00:10:09.333 LBA Status Info Alert Notices: Not Supported 00:10:09.333 EGE Aggregate Log Change Notices: Not Supported 00:10:09.333 Normal NVM Subsystem Shutdown event: Not Supported 00:10:09.333 Zone Descriptor Change Notices: Not Supported 00:10:09.333 Discovery Log Change Notices: Not Supported 00:10:09.333 Controller Attributes 00:10:09.333 128-bit Host Identifier: Not Supported 00:10:09.333 Non-Operational Permissive Mode: Not Supported 00:10:09.333 NVM Sets: Not Supported 00:10:09.333 Read Recovery Levels: Not Supported 00:10:09.333 Endurance Groups: Supported 00:10:09.333 Predictable Latency Mode: Not Supported 00:10:09.333 Traffic Based Keep ALive: Not Supported 00:10:09.333 Namespace Granularity: Not Supported 00:10:09.333 SQ Associations: Not Supported 00:10:09.333 UUID List: Not Supported 00:10:09.333 Multi-Domain Subsystem: Not Supported 00:10:09.333 Fixed Capacity Management: Not Supported 00:10:09.333 Variable Capacity Management: Not Supported 00:10:09.333 Delete Endurance Group: Not Supported 00:10:09.333 Delete NVM Set: Not Supported 00:10:09.333 Extended LBA Formats Supported: Supported 00:10:09.333 Flexible Data Placement Supported: Supported 00:10:09.333 00:10:09.333 Controller Memory Buffer Support 00:10:09.333 ================================ 00:10:09.333 Supported: No 00:10:09.333 00:10:09.333 Persistent Memory Region Support 00:10:09.333 ================================ 00:10:09.333 Supported: No 00:10:09.333 00:10:09.333 Admin Command Set Attributes 00:10:09.333 ============================ 00:10:09.333 Security Send/Receive: Not Supported 00:10:09.333 Format NVM: Supported 00:10:09.333 Firmware Activate/Download: Not Supported 00:10:09.333 Namespace Management: Supported 00:10:09.333 Device Self-Test: Not Supported 00:10:09.333 Directives: Supported 00:10:09.333 NVMe-MI: Not Supported 00:10:09.333 Virtualization Management: Not Supported 00:10:09.333 Doorbell Buffer Config: Supported 00:10:09.333 Get LBA Status Capability: Not Supported 00:10:09.333 Command & Feature Lockdown Capability: Not Supported 00:10:09.333 Abort Command Limit: 4 00:10:09.333 Async Event Request Limit: 4 00:10:09.333 Number of Firmware Slots: N/A 00:10:09.333 Firmware Slot 1 Read-Only: N/A 00:10:09.333 Firmware Activation Without Reset: N/A 00:10:09.333 Multiple Update Detection Support: N/A 00:10:09.333 Firmware Update Granularity: No Information Provided 00:10:09.333 Per-Namespace SMART Log: Yes 00:10:09.333 Asymmetric Namespace Access Log Page: Not Supported 00:10:09.333 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:09.333 Command Effects Log Page: Supported 00:10:09.333 Get Log Page Extended Data: Supported 00:10:09.333 Telemetry Log Pages: Not Supported 00:10:09.333 Persistent Event Log Pages: Not Supported 00:10:09.333 Supported Log Pages Log Page: May Support 00:10:09.333 Commands Supported & Effects Log Page: Not Supported 00:10:09.333 Feature Identifiers & Effects Log 
Page:May Support 00:10:09.333 NVMe-MI Commands & Effects Log Page: May Support 00:10:09.333 Data Area 4 for Telemetry Log: Not Supported 00:10:09.333 Error Log Page Entries Supported: 1 00:10:09.333 Keep Alive: Not Supported 00:10:09.333 00:10:09.333 NVM Command Set Attributes 00:10:09.333 ========================== 00:10:09.333 Submission Queue Entry Size 00:10:09.333 Max: 64 00:10:09.333 Min: 64 00:10:09.333 Completion Queue Entry Size 00:10:09.333 Max: 16 00:10:09.333 Min: 16 00:10:09.333 Number of Namespaces: 256 00:10:09.333 Compare Command: Supported 00:10:09.333 Write Uncorrectable Command: Not Supported 00:10:09.333 Dataset Management Command: Supported 00:10:09.333 Write Zeroes Command: Supported 00:10:09.333 Set Features Save Field: Supported 00:10:09.333 Reservations: Not Supported 00:10:09.333 Timestamp: Supported 00:10:09.333 Copy: Supported 00:10:09.333 Volatile Write Cache: Present 00:10:09.333 Atomic Write Unit (Normal): 1 00:10:09.333 Atomic Write Unit (PFail): 1 00:10:09.333 Atomic Compare & Write Unit: 1 00:10:09.333 Fused Compare & Write: Not Supported 00:10:09.333 Scatter-Gather List 00:10:09.333 SGL Command Set: Supported 00:10:09.333 SGL Keyed: Not Supported 00:10:09.333 SGL Bit Bucket Descriptor: Not Supported 00:10:09.333 SGL Metadata Pointer: Not Supported 00:10:09.333 Oversized SGL: Not Supported 00:10:09.333 SGL Metadata Address: Not Supported 00:10:09.333 SGL Offset: Not Supported 00:10:09.333 Transport SGL Data Block: Not Supported 00:10:09.333 Replay Protected Memory Block: Not Supported 00:10:09.333 00:10:09.333 Firmware Slot Information 00:10:09.333 ========================= 00:10:09.333 Active slot: 1 00:10:09.333 Slot 1 Firmware Revision: 1.0 00:10:09.333 00:10:09.333 00:10:09.333 Commands Supported and Effects 00:10:09.333 ============================== 00:10:09.333 Admin Commands 00:10:09.333 -------------- 00:10:09.333 Delete I/O Submission Queue (00h): Supported 00:10:09.333 Create I/O Submission Queue (01h): Supported 00:10:09.333 Get Log Page (02h): Supported 00:10:09.333 Delete I/O Completion Queue (04h): Supported 00:10:09.333 Create I/O Completion Queue (05h): Supported 00:10:09.333 Identify (06h): Supported 00:10:09.333 Abort (08h): Supported 00:10:09.333 Set Features (09h): Supported 00:10:09.333 Get Features (0Ah): Supported 00:10:09.333 Asynchronous Event Request (0Ch): Supported 00:10:09.333 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:09.333 Directive Send (19h): Supported 00:10:09.333 Directive Receive (1Ah): Supported 00:10:09.333 Virtualization Management (1Ch): Supported 00:10:09.333 Doorbell Buffer Config (7Ch): Supported 00:10:09.333 Format NVM (80h): Supported LBA-Change 00:10:09.333 I/O Commands 00:10:09.333 ------------ 00:10:09.333 Flush (00h): Supported LBA-Change 00:10:09.333 Write (01h): Supported LBA-Change 00:10:09.333 Read (02h): Supported 00:10:09.333 Compare (05h): Supported 00:10:09.333 Write Zeroes (08h): Supported LBA-Change 00:10:09.333 Dataset Management (09h): Supported LBA-Change 00:10:09.333 Unknown (0Ch): Supported 00:10:09.333 Unknown (12h): Supported 00:10:09.333 Copy (19h): Supported LBA-Change 00:10:09.333 Unknown (1Dh): Supported LBA-Change 00:10:09.333 00:10:09.333 Error Log 00:10:09.333 ========= 00:10:09.333 00:10:09.333 Arbitration 00:10:09.333 =========== 00:10:09.333 Arbitration Burst: no limit 00:10:09.333 00:10:09.333 Power Management 00:10:09.333 ================ 00:10:09.333 Number of Power States: 1 00:10:09.333 Current Power State: Power State #0 00:10:09.333 Power State #0: 
00:10:09.333 Max Power: 25.00 W 00:10:09.333 Non-Operational State: Operational 00:10:09.333 Entry Latency: 16 microseconds 00:10:09.333 Exit Latency: 4 microseconds 00:10:09.333 Relative Read Throughput: 0 00:10:09.333 Relative Read Latency: 0 00:10:09.333 Relative Write Throughput: 0 00:10:09.333 Relative Write Latency: 0 00:10:09.333 Idle Power: Not Reported 00:10:09.333 Active Power: Not Reported 00:10:09.333 Non-Operational Permissive Mode: Not Supported 00:10:09.333 00:10:09.333 Health Information 00:10:09.333 ================== 00:10:09.333 Critical Warnings: 00:10:09.333 Available Spare Space: OK 00:10:09.333 Temperature: OK 00:10:09.333 Device Reliability: OK 00:10:09.333 Read Only: No 00:10:09.333 Volatile Memory Backup: OK 00:10:09.333 Current Temperature: 323 Kelvin (50 Celsius) 00:10:09.333 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:09.333 Available Spare: 0% 00:10:09.333 Available Spare Threshold: 0% 00:10:09.333 Life Percentage Used: 0% 00:10:09.333 Data Units Read: 814 00:10:09.333 Data Units Written: 708 00:10:09.333 Host Read Commands: 34894 00:10:09.333 Host Write Commands: 33484 00:10:09.333 Controller Busy Time: 0 minutes 00:10:09.333 Power Cycles: 0 00:10:09.333 Power On Hours: 0 hours 00:10:09.333 Unsafe Shutdowns: 0 00:10:09.333 Unrecoverable Media Errors: 0 00:10:09.333 Lifetime Error Log Entries: 0 00:10:09.333 Warning Temperature Time: 0 minutes 00:10:09.333 Critical Temperature Time: 0 minutes 00:10:09.333 00:10:09.333 Number of Queues 00:10:09.333 ================ 00:10:09.333 Number of I/O Submission Queues: 64 00:10:09.333 Number of I/O Completion Queues: 64 00:10:09.333 00:10:09.334 ZNS Specific Controller Data 00:10:09.334 ============================ 00:10:09.334 Zone Append Size Limit: 0 00:10:09.334 00:10:09.334 00:10:09.334 Active Namespaces 00:10:09.334 ================= 00:10:09.334 Namespace ID:1 00:10:09.334 Error Recovery Timeout: Unlimited 00:10:09.334 Command Set Identifier: NVM (00h) 00:10:09.334 Deallocate: Supported 00:10:09.334 Deallocated/Unwritten Error: Supported 00:10:09.334 Deallocated Read Value: All 0x00 00:10:09.334 Deallocate in Write Zeroes: Not Supported 00:10:09.334 Deallocated Guard Field: 0xFFFF 00:10:09.334 Flush: Supported 00:10:09.334 Reservation: Not Supported 00:10:09.334 Namespace Sharing Capabilities: Multiple Controllers 00:10:09.334 Size (in LBAs): 262144 (1GiB) 00:10:09.334 Capacity (in LBAs): 262144 (1GiB) 00:10:09.334 Utilization (in LBAs): 262144 (1GiB) 00:10:09.334 Thin Provisioning: Not Supported 00:10:09.334 Per-NS Atomic Units: No 00:10:09.334 Maximum Single Source Range Length: 128 00:10:09.334 Maximum Copy Length: 128 00:10:09.334 Maximum Source Range Count: 128 00:10:09.334 NGUID/EUI64 Never Reused: No 00:10:09.334 Namespace Write Protected: No 00:10:09.334 Endurance group ID: 1 00:10:09.334 Number of LBA Formats: 8 00:10:09.334 Current LBA Format: LBA Format #04 00:10:09.334 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:09.334 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:09.334 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:09.334 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:09.334 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:09.334 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:09.334 LBA Format #06: Data Size[2024-07-15 13:09:05.998873] nvme_ctrlr.c:3486:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 80620 terminated unexpected 00:10:09.334 : 4096 Metadata Size: 16 00:10:09.334 LBA Format #07: Data Size: 4096 Metadata Size: 64 
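The namespace listing just above reports its size both as a raw LBA count and as a rounded capacity, and with the current LBA format (#04: 4096-byte data blocks, no metadata) the two figures agree. A minimal bash sanity check, using only numbers quoted from the dump above (the variable names are illustrative and are not part of the test scripts):

  # capacity in bytes = LBA count x data size of the current LBA format
  lba_count=262144        # "Size (in LBAs)" reported for Namespace ID:1 above
  lba_data_size=4096      # data size of LBA Format #04, the current format
  echo $(( lba_count * lba_data_size ))             # 1073741824 bytes
  echo $(( lba_count * lba_data_size / 1024**3 ))   # 1, i.e. the "(1GiB)" shown above

The same arithmetic matches the other 4096-byte-format namespaces in this log, e.g. 1048576 LBAs x 4096 = 4 GiB and 1310720 LBAs x 4096 = 5 GiB.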
00:10:09.334 00:10:09.334 Get Feature FDP: 00:10:09.334 ================ 00:10:09.334 Enabled: Yes 00:10:09.334 FDP configuration index: 0 00:10:09.334 00:10:09.334 FDP configurations log page 00:10:09.334 =========================== 00:10:09.334 Number of FDP configurations: 1 00:10:09.334 Version: 0 00:10:09.334 Size: 112 00:10:09.334 FDP Configuration Descriptor: 0 00:10:09.334 Descriptor Size: 96 00:10:09.334 Reclaim Group Identifier format: 2 00:10:09.334 FDP Volatile Write Cache: Not Present 00:10:09.334 FDP Configuration: Valid 00:10:09.334 Vendor Specific Size: 0 00:10:09.334 Number of Reclaim Groups: 2 00:10:09.334 Number of Recalim Unit Handles: 8 00:10:09.334 Max Placement Identifiers: 128 00:10:09.334 Number of Namespaces Suppprted: 256 00:10:09.334 Reclaim unit Nominal Size: 6000000 bytes 00:10:09.334 Estimated Reclaim Unit Time Limit: Not Reported 00:10:09.334 RUH Desc #000: RUH Type: Initially Isolated 00:10:09.334 RUH Desc #001: RUH Type: Initially Isolated 00:10:09.334 RUH Desc #002: RUH Type: Initially Isolated 00:10:09.334 RUH Desc #003: RUH Type: Initially Isolated 00:10:09.334 RUH Desc #004: RUH Type: Initially Isolated 00:10:09.334 RUH Desc #005: RUH Type: Initially Isolated 00:10:09.334 RUH Desc #006: RUH Type: Initially Isolated 00:10:09.334 RUH Desc #007: RUH Type: Initially Isolated 00:10:09.334 00:10:09.334 FDP reclaim unit handle usage log page 00:10:09.334 ====================================== 00:10:09.334 Number of Reclaim Unit Handles: 8 00:10:09.334 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:09.334 RUH Usage Desc #001: RUH Attributes: Unused 00:10:09.334 RUH Usage Desc #002: RUH Attributes: Unused 00:10:09.334 RUH Usage Desc #003: RUH Attributes: Unused 00:10:09.334 RUH Usage Desc #004: RUH Attributes: Unused 00:10:09.334 RUH Usage Desc #005: RUH Attributes: Unused 00:10:09.334 RUH Usage Desc #006: RUH Attributes: Unused 00:10:09.334 RUH Usage Desc #007: RUH Attributes: Unused 00:10:09.334 00:10:09.334 FDP statistics log page 00:10:09.334 ======================= 00:10:09.334 Host bytes with metadata written: 413769728 00:10:09.334 Media bytes with metadata written: 413822976 00:10:09.334 Media bytes erased: 0 00:10:09.334 00:10:09.334 FDP events log page 00:10:09.334 =================== 00:10:09.334 Number of FDP events: 0 00:10:09.334 00:10:09.334 ===================================================== 00:10:09.334 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:09.334 ===================================================== 00:10:09.334 Controller Capabilities/Features 00:10:09.334 ================================ 00:10:09.334 Vendor ID: 1b36 00:10:09.334 Subsystem Vendor ID: 1af4 00:10:09.334 Serial Number: 12342 00:10:09.334 Model Number: QEMU NVMe Ctrl 00:10:09.334 Firmware Version: 8.0.0 00:10:09.334 Recommended Arb Burst: 6 00:10:09.334 IEEE OUI Identifier: 00 54 52 00:10:09.334 Multi-path I/O 00:10:09.334 May have multiple subsystem ports: No 00:10:09.334 May have multiple controllers: No 00:10:09.334 Associated with SR-IOV VF: No 00:10:09.334 Max Data Transfer Size: 524288 00:10:09.334 Max Number of Namespaces: 256 00:10:09.334 Max Number of I/O Queues: 64 00:10:09.334 NVMe Specification Version (VS): 1.4 00:10:09.334 NVMe Specification Version (Identify): 1.4 00:10:09.334 Maximum Queue Entries: 2048 00:10:09.334 Contiguous Queues Required: Yes 00:10:09.334 Arbitration Mechanisms Supported 00:10:09.334 Weighted Round Robin: Not Supported 00:10:09.334 Vendor Specific: Not Supported 00:10:09.334 Reset Timeout: 7500 ms 00:10:09.334 
Doorbell Stride: 4 bytes 00:10:09.334 NVM Subsystem Reset: Not Supported 00:10:09.334 Command Sets Supported 00:10:09.334 NVM Command Set: Supported 00:10:09.334 Boot Partition: Not Supported 00:10:09.334 Memory Page Size Minimum: 4096 bytes 00:10:09.334 Memory Page Size Maximum: 65536 bytes 00:10:09.334 Persistent Memory Region: Not Supported 00:10:09.334 Optional Asynchronous Events Supported 00:10:09.334 Namespace Attribute Notices: Supported 00:10:09.334 Firmware Activation Notices: Not Supported 00:10:09.334 ANA Change Notices: Not Supported 00:10:09.334 PLE Aggregate Log Change Notices: Not Supported 00:10:09.334 LBA Status Info Alert Notices: Not Supported 00:10:09.334 EGE Aggregate Log Change Notices: Not Supported 00:10:09.334 Normal NVM Subsystem Shutdown event: Not Supported 00:10:09.334 Zone Descriptor Change Notices: Not Supported 00:10:09.334 Discovery Log Change Notices: Not Supported 00:10:09.334 Controller Attributes 00:10:09.334 128-bit Host Identifier: Not Supported 00:10:09.334 Non-Operational Permissive Mode: Not Supported 00:10:09.334 NVM Sets: Not Supported 00:10:09.334 Read Recovery Levels: Not Supported 00:10:09.334 Endurance Groups: Not Supported 00:10:09.334 Predictable Latency Mode: Not Supported 00:10:09.334 Traffic Based Keep ALive: Not Supported 00:10:09.334 Namespace Granularity: Not Supported 00:10:09.334 SQ Associations: Not Supported 00:10:09.334 UUID List: Not Supported 00:10:09.334 Multi-Domain Subsystem: Not Supported 00:10:09.334 Fixed Capacity Management: Not Supported 00:10:09.334 Variable Capacity Management: Not Supported 00:10:09.334 Delete Endurance Group: Not Supported 00:10:09.334 Delete NVM Set: Not Supported 00:10:09.334 Extended LBA Formats Supported: Supported 00:10:09.334 Flexible Data Placement Supported: Not Supported 00:10:09.334 00:10:09.334 Controller Memory Buffer Support 00:10:09.334 ================================ 00:10:09.334 Supported: No 00:10:09.334 00:10:09.334 Persistent Memory Region Support 00:10:09.334 ================================ 00:10:09.334 Supported: No 00:10:09.334 00:10:09.334 Admin Command Set Attributes 00:10:09.334 ============================ 00:10:09.334 Security Send/Receive: Not Supported 00:10:09.334 Format NVM: Supported 00:10:09.334 Firmware Activate/Download: Not Supported 00:10:09.334 Namespace Management: Supported 00:10:09.334 Device Self-Test: Not Supported 00:10:09.334 Directives: Supported 00:10:09.334 NVMe-MI: Not Supported 00:10:09.334 Virtualization Management: Not Supported 00:10:09.334 Doorbell Buffer Config: Supported 00:10:09.334 Get LBA Status Capability: Not Supported 00:10:09.334 Command & Feature Lockdown Capability: Not Supported 00:10:09.334 Abort Command Limit: 4 00:10:09.334 Async Event Request Limit: 4 00:10:09.334 Number of Firmware Slots: N/A 00:10:09.334 Firmware Slot 1 Read-Only: N/A 00:10:09.335 Firmware Activation Without Reset: N/A 00:10:09.335 Multiple Update Detection Support: N/A 00:10:09.335 Firmware Update Granularity: No Information Provided 00:10:09.335 Per-Namespace SMART Log: Yes 00:10:09.335 Asymmetric Namespace Access Log Page: Not Supported 00:10:09.335 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:09.335 Command Effects Log Page: Supported 00:10:09.335 Get Log Page Extended Data: Supported 00:10:09.335 Telemetry Log Pages: Not Supported 00:10:09.335 Persistent Event Log Pages: Not Supported 00:10:09.335 Supported Log Pages Log Page: May Support 00:10:09.335 Commands Supported & Effects Log Page: Not Supported 00:10:09.335 Feature Identifiers & Effects Log 
Page:May Support 00:10:09.335 NVMe-MI Commands & Effects Log Page: May Support 00:10:09.335 Data Area 4 for Telemetry Log: Not Supported 00:10:09.335 Error Log Page Entries Supported: 1 00:10:09.335 Keep Alive: Not Supported 00:10:09.335 00:10:09.335 NVM Command Set Attributes 00:10:09.335 ========================== 00:10:09.335 Submission Queue Entry Size 00:10:09.335 Max: 64 00:10:09.335 Min: 64 00:10:09.335 Completion Queue Entry Size 00:10:09.335 Max: 16 00:10:09.335 Min: 16 00:10:09.335 Number of Namespaces: 256 00:10:09.335 Compare Command: Supported 00:10:09.335 Write Uncorrectable Command: Not Supported 00:10:09.335 Dataset Management Command: Supported 00:10:09.335 Write Zeroes Command: Supported 00:10:09.335 Set Features Save Field: Supported 00:10:09.335 Reservations: Not Supported 00:10:09.335 Timestamp: Supported 00:10:09.335 Copy: Supported 00:10:09.335 Volatile Write Cache: Present 00:10:09.335 Atomic Write Unit (Normal): 1 00:10:09.335 Atomic Write Unit (PFail): 1 00:10:09.335 Atomic Compare & Write Unit: 1 00:10:09.335 Fused Compare & Write: Not Supported 00:10:09.335 Scatter-Gather List 00:10:09.335 SGL Command Set: Supported 00:10:09.335 SGL Keyed: Not Supported 00:10:09.335 SGL Bit Bucket Descriptor: Not Supported 00:10:09.335 SGL Metadata Pointer: Not Supported 00:10:09.335 Oversized SGL: Not Supported 00:10:09.335 SGL Metadata Address: Not Supported 00:10:09.335 SGL Offset: Not Supported 00:10:09.335 Transport SGL Data Block: Not Supported 00:10:09.335 Replay Protected Memory Block: Not Supported 00:10:09.335 00:10:09.335 Firmware Slot Information 00:10:09.335 ========================= 00:10:09.335 Active slot: 1 00:10:09.335 Slot 1 Firmware Revision: 1.0 00:10:09.335 00:10:09.335 00:10:09.335 Commands Supported and Effects 00:10:09.335 ============================== 00:10:09.335 Admin Commands 00:10:09.335 -------------- 00:10:09.335 Delete I/O Submission Queue (00h): Supported 00:10:09.335 Create I/O Submission Queue (01h): Supported 00:10:09.335 Get Log Page (02h): Supported 00:10:09.335 Delete I/O Completion Queue (04h): Supported 00:10:09.335 Create I/O Completion Queue (05h): Supported 00:10:09.335 Identify (06h): Supported 00:10:09.335 Abort (08h): Supported 00:10:09.335 Set Features (09h): Supported 00:10:09.335 Get Features (0Ah): Supported 00:10:09.335 Asynchronous Event Request (0Ch): Supported 00:10:09.335 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:09.335 Directive Send (19h): Supported 00:10:09.335 Directive Receive (1Ah): Supported 00:10:09.335 Virtualization Management (1Ch): Supported 00:10:09.335 Doorbell Buffer Config (7Ch): Supported 00:10:09.335 Format NVM (80h): Supported LBA-Change 00:10:09.335 I/O Commands 00:10:09.335 ------------ 00:10:09.335 Flush (00h): Supported LBA-Change 00:10:09.335 Write (01h): Supported LBA-Change 00:10:09.335 Read (02h): Supported 00:10:09.335 Compare (05h): Supported 00:10:09.335 Write Zeroes (08h): Supported LBA-Change 00:10:09.335 Dataset Management (09h): Supported LBA-Change 00:10:09.335 Unknown (0Ch): Supported 00:10:09.335 Unknown (12h): Supported 00:10:09.335 Copy (19h): Supported LBA-Change 00:10:09.335 Unknown (1Dh): Supported LBA-Change 00:10:09.335 00:10:09.335 Error Log 00:10:09.335 ========= 00:10:09.335 00:10:09.335 Arbitration 00:10:09.335 =========== 00:10:09.335 Arbitration Burst: no limit 00:10:09.335 00:10:09.335 Power Management 00:10:09.335 ================ 00:10:09.335 Number of Power States: 1 00:10:09.335 Current Power State: Power State #0 00:10:09.335 Power State #0: 
00:10:09.335 Max Power: 25.00 W 00:10:09.335 Non-Operational State: Operational 00:10:09.335 Entry Latency: 16 microseconds 00:10:09.335 Exit Latency: 4 microseconds 00:10:09.335 Relative Read Throughput: 0 00:10:09.335 Relative Read Latency: 0 00:10:09.335 Relative Write Throughput: 0 00:10:09.335 Relative Write Latency: 0 00:10:09.335 Idle Power: Not Reported 00:10:09.335 Active Power: Not Reported 00:10:09.335 Non-Operational Permissive Mode: Not Supported 00:10:09.335 00:10:09.335 Health Information 00:10:09.335 ================== 00:10:09.335 Critical Warnings: 00:10:09.335 Available Spare Space: OK 00:10:09.335 Temperature: OK 00:10:09.335 Device Reliability: OK 00:10:09.335 Read Only: No 00:10:09.335 Volatile Memory Backup: OK 00:10:09.335 Current Temperature: 323 Kelvin (50 Celsius) 00:10:09.335 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:09.335 Available Spare: 0% 00:10:09.335 Available Spare Threshold: 0% 00:10:09.335 Life Percentage Used: 0% 00:10:09.335 Data Units Read: 2206 00:10:09.335 Data Units Written: 1886 00:10:09.335 Host Read Commands: 102839 00:10:09.335 Host Write Commands: 98609 00:10:09.335 Controller Busy Time: 0 minutes 00:10:09.335 Power Cycles: 0 00:10:09.335 Power On Hours: 0 hours 00:10:09.335 Unsafe Shutdowns: 0 00:10:09.335 Unrecoverable Media Errors: 0 00:10:09.335 Lifetime Error Log Entries: 0 00:10:09.335 Warning Temperature Time: 0 minutes 00:10:09.335 Critical Temperature Time: 0 minutes 00:10:09.335 00:10:09.335 Number of Queues 00:10:09.335 ================ 00:10:09.335 Number of I/O Submission Queues: 64 00:10:09.335 Number of I/O Completion Queues: 64 00:10:09.335 00:10:09.335 ZNS Specific Controller Data 00:10:09.335 ============================ 00:10:09.335 Zone Append Size Limit: 0 00:10:09.335 00:10:09.335 00:10:09.335 Active Namespaces 00:10:09.335 ================= 00:10:09.335 Namespace ID:1 00:10:09.335 Error Recovery Timeout: Unlimited 00:10:09.335 Command Set Identifier: NVM (00h) 00:10:09.335 Deallocate: Supported 00:10:09.335 Deallocated/Unwritten Error: Supported 00:10:09.335 Deallocated Read Value: All 0x00 00:10:09.335 Deallocate in Write Zeroes: Not Supported 00:10:09.335 Deallocated Guard Field: 0xFFFF 00:10:09.335 Flush: Supported 00:10:09.335 Reservation: Not Supported 00:10:09.335 Namespace Sharing Capabilities: Private 00:10:09.335 Size (in LBAs): 1048576 (4GiB) 00:10:09.335 Capacity (in LBAs): 1048576 (4GiB) 00:10:09.335 Utilization (in LBAs): 1048576 (4GiB) 00:10:09.335 Thin Provisioning: Not Supported 00:10:09.335 Per-NS Atomic Units: No 00:10:09.335 Maximum Single Source Range Length: 128 00:10:09.335 Maximum Copy Length: 128 00:10:09.335 Maximum Source Range Count: 128 00:10:09.335 NGUID/EUI64 Never Reused: No 00:10:09.335 Namespace Write Protected: No 00:10:09.335 Number of LBA Formats: 8 00:10:09.335 Current LBA Format: LBA Format #04 00:10:09.335 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:09.335 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:09.335 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:09.335 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:09.335 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:09.335 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:09.335 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:09.335 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:09.335 00:10:09.335 Namespace ID:2 00:10:09.335 Error Recovery Timeout: Unlimited 00:10:09.335 Command Set Identifier: NVM (00h) 00:10:09.335 Deallocate: Supported 00:10:09.335 
Deallocated/Unwritten Error: Supported 00:10:09.335 Deallocated Read Value: All 0x00 00:10:09.335 Deallocate in Write Zeroes: Not Supported 00:10:09.335 Deallocated Guard Field: 0xFFFF 00:10:09.335 Flush: Supported 00:10:09.335 Reservation: Not Supported 00:10:09.336 Namespace Sharing Capabilities: Private 00:10:09.336 Size (in LBAs): 1048576 (4GiB) 00:10:09.336 Capacity (in LBAs): 1048576 (4GiB) 00:10:09.336 Utilization (in LBAs): 1048576 (4GiB) 00:10:09.336 Thin Provisioning: Not Supported 00:10:09.336 Per-NS Atomic Units: No 00:10:09.336 Maximum Single Source Range Length: 128 00:10:09.336 Maximum Copy Length: 128 00:10:09.336 Maximum Source Range Count: 128 00:10:09.336 NGUID/EUI64 Never Reused: No 00:10:09.336 Namespace Write Protected: No 00:10:09.336 Number of LBA Formats: 8 00:10:09.336 Current LBA Format: LBA Format #04 00:10:09.336 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:09.336 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:09.336 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:09.336 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:09.336 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:09.336 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:09.336 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:09.336 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:09.336 00:10:09.336 Namespace ID:3 00:10:09.336 Error Recovery Timeout: Unlimited 00:10:09.336 Command Set Identifier: NVM (00h) 00:10:09.336 Deallocate: Supported 00:10:09.336 Deallocated/Unwritten Error: Supported 00:10:09.336 Deallocated Read Value: All 0x00 00:10:09.336 Deallocate in Write Zeroes: Not Supported 00:10:09.336 Deallocated Guard Field: 0xFFFF 00:10:09.336 Flush: Supported 00:10:09.336 Reservation: Not Supported 00:10:09.336 Namespace Sharing Capabilities: Private 00:10:09.336 Size (in LBAs): 1048576 (4GiB) 00:10:09.336 Capacity (in LBAs): 1048576 (4GiB) 00:10:09.336 Utilization (in LBAs): 1048576 (4GiB) 00:10:09.336 Thin Provisioning: Not Supported 00:10:09.336 Per-NS Atomic Units: No 00:10:09.336 Maximum Single Source Range Length: 128 00:10:09.336 Maximum Copy Length: 128 00:10:09.336 Maximum Source Range Count: 128 00:10:09.336 NGUID/EUI64 Never Reused: No 00:10:09.336 Namespace Write Protected: No 00:10:09.336 Number of LBA Formats: 8 00:10:09.336 Current LBA Format: LBA Format #04 00:10:09.336 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:09.336 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:09.336 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:09.336 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:09.336 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:09.336 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:09.336 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:09.336 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:09.336 00:10:09.336 13:09:06 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:09.336 13:09:06 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:10:09.596 ===================================================== 00:10:09.596 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:09.596 ===================================================== 00:10:09.596 Controller Capabilities/Features 00:10:09.596 ================================ 00:10:09.596 Vendor ID: 1b36 00:10:09.596 Subsystem Vendor ID: 1af4 00:10:09.596 Serial Number: 12340 00:10:09.596 Model Number: QEMU NVMe Ctrl 
00:10:09.596 Firmware Version: 8.0.0 00:10:09.596 Recommended Arb Burst: 6 00:10:09.596 IEEE OUI Identifier: 00 54 52 00:10:09.596 Multi-path I/O 00:10:09.596 May have multiple subsystem ports: No 00:10:09.596 May have multiple controllers: No 00:10:09.596 Associated with SR-IOV VF: No 00:10:09.596 Max Data Transfer Size: 524288 00:10:09.596 Max Number of Namespaces: 256 00:10:09.596 Max Number of I/O Queues: 64 00:10:09.596 NVMe Specification Version (VS): 1.4 00:10:09.596 NVMe Specification Version (Identify): 1.4 00:10:09.596 Maximum Queue Entries: 2048 00:10:09.596 Contiguous Queues Required: Yes 00:10:09.596 Arbitration Mechanisms Supported 00:10:09.596 Weighted Round Robin: Not Supported 00:10:09.596 Vendor Specific: Not Supported 00:10:09.596 Reset Timeout: 7500 ms 00:10:09.596 Doorbell Stride: 4 bytes 00:10:09.596 NVM Subsystem Reset: Not Supported 00:10:09.596 Command Sets Supported 00:10:09.596 NVM Command Set: Supported 00:10:09.596 Boot Partition: Not Supported 00:10:09.596 Memory Page Size Minimum: 4096 bytes 00:10:09.596 Memory Page Size Maximum: 65536 bytes 00:10:09.596 Persistent Memory Region: Not Supported 00:10:09.596 Optional Asynchronous Events Supported 00:10:09.596 Namespace Attribute Notices: Supported 00:10:09.596 Firmware Activation Notices: Not Supported 00:10:09.596 ANA Change Notices: Not Supported 00:10:09.596 PLE Aggregate Log Change Notices: Not Supported 00:10:09.596 LBA Status Info Alert Notices: Not Supported 00:10:09.596 EGE Aggregate Log Change Notices: Not Supported 00:10:09.596 Normal NVM Subsystem Shutdown event: Not Supported 00:10:09.596 Zone Descriptor Change Notices: Not Supported 00:10:09.596 Discovery Log Change Notices: Not Supported 00:10:09.596 Controller Attributes 00:10:09.596 128-bit Host Identifier: Not Supported 00:10:09.596 Non-Operational Permissive Mode: Not Supported 00:10:09.596 NVM Sets: Not Supported 00:10:09.596 Read Recovery Levels: Not Supported 00:10:09.596 Endurance Groups: Not Supported 00:10:09.596 Predictable Latency Mode: Not Supported 00:10:09.596 Traffic Based Keep ALive: Not Supported 00:10:09.596 Namespace Granularity: Not Supported 00:10:09.596 SQ Associations: Not Supported 00:10:09.596 UUID List: Not Supported 00:10:09.596 Multi-Domain Subsystem: Not Supported 00:10:09.596 Fixed Capacity Management: Not Supported 00:10:09.596 Variable Capacity Management: Not Supported 00:10:09.596 Delete Endurance Group: Not Supported 00:10:09.596 Delete NVM Set: Not Supported 00:10:09.596 Extended LBA Formats Supported: Supported 00:10:09.596 Flexible Data Placement Supported: Not Supported 00:10:09.596 00:10:09.596 Controller Memory Buffer Support 00:10:09.596 ================================ 00:10:09.596 Supported: No 00:10:09.596 00:10:09.596 Persistent Memory Region Support 00:10:09.596 ================================ 00:10:09.596 Supported: No 00:10:09.596 00:10:09.596 Admin Command Set Attributes 00:10:09.596 ============================ 00:10:09.596 Security Send/Receive: Not Supported 00:10:09.596 Format NVM: Supported 00:10:09.596 Firmware Activate/Download: Not Supported 00:10:09.596 Namespace Management: Supported 00:10:09.596 Device Self-Test: Not Supported 00:10:09.596 Directives: Supported 00:10:09.596 NVMe-MI: Not Supported 00:10:09.596 Virtualization Management: Not Supported 00:10:09.596 Doorbell Buffer Config: Supported 00:10:09.596 Get LBA Status Capability: Not Supported 00:10:09.596 Command & Feature Lockdown Capability: Not Supported 00:10:09.596 Abort Command Limit: 4 00:10:09.596 Async Event Request 
Limit: 4 00:10:09.596 Number of Firmware Slots: N/A 00:10:09.596 Firmware Slot 1 Read-Only: N/A 00:10:09.596 Firmware Activation Without Reset: N/A 00:10:09.596 Multiple Update Detection Support: N/A 00:10:09.596 Firmware Update Granularity: No Information Provided 00:10:09.596 Per-Namespace SMART Log: Yes 00:10:09.596 Asymmetric Namespace Access Log Page: Not Supported 00:10:09.596 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:09.596 Command Effects Log Page: Supported 00:10:09.596 Get Log Page Extended Data: Supported 00:10:09.596 Telemetry Log Pages: Not Supported 00:10:09.596 Persistent Event Log Pages: Not Supported 00:10:09.596 Supported Log Pages Log Page: May Support 00:10:09.596 Commands Supported & Effects Log Page: Not Supported 00:10:09.596 Feature Identifiers & Effects Log Page:May Support 00:10:09.596 NVMe-MI Commands & Effects Log Page: May Support 00:10:09.596 Data Area 4 for Telemetry Log: Not Supported 00:10:09.596 Error Log Page Entries Supported: 1 00:10:09.596 Keep Alive: Not Supported 00:10:09.596 00:10:09.596 NVM Command Set Attributes 00:10:09.596 ========================== 00:10:09.596 Submission Queue Entry Size 00:10:09.596 Max: 64 00:10:09.596 Min: 64 00:10:09.596 Completion Queue Entry Size 00:10:09.596 Max: 16 00:10:09.596 Min: 16 00:10:09.596 Number of Namespaces: 256 00:10:09.596 Compare Command: Supported 00:10:09.596 Write Uncorrectable Command: Not Supported 00:10:09.596 Dataset Management Command: Supported 00:10:09.596 Write Zeroes Command: Supported 00:10:09.596 Set Features Save Field: Supported 00:10:09.596 Reservations: Not Supported 00:10:09.596 Timestamp: Supported 00:10:09.596 Copy: Supported 00:10:09.596 Volatile Write Cache: Present 00:10:09.596 Atomic Write Unit (Normal): 1 00:10:09.596 Atomic Write Unit (PFail): 1 00:10:09.596 Atomic Compare & Write Unit: 1 00:10:09.596 Fused Compare & Write: Not Supported 00:10:09.596 Scatter-Gather List 00:10:09.596 SGL Command Set: Supported 00:10:09.596 SGL Keyed: Not Supported 00:10:09.596 SGL Bit Bucket Descriptor: Not Supported 00:10:09.596 SGL Metadata Pointer: Not Supported 00:10:09.596 Oversized SGL: Not Supported 00:10:09.596 SGL Metadata Address: Not Supported 00:10:09.596 SGL Offset: Not Supported 00:10:09.596 Transport SGL Data Block: Not Supported 00:10:09.596 Replay Protected Memory Block: Not Supported 00:10:09.596 00:10:09.596 Firmware Slot Information 00:10:09.596 ========================= 00:10:09.596 Active slot: 1 00:10:09.596 Slot 1 Firmware Revision: 1.0 00:10:09.596 00:10:09.596 00:10:09.596 Commands Supported and Effects 00:10:09.596 ============================== 00:10:09.596 Admin Commands 00:10:09.596 -------------- 00:10:09.596 Delete I/O Submission Queue (00h): Supported 00:10:09.596 Create I/O Submission Queue (01h): Supported 00:10:09.596 Get Log Page (02h): Supported 00:10:09.596 Delete I/O Completion Queue (04h): Supported 00:10:09.596 Create I/O Completion Queue (05h): Supported 00:10:09.596 Identify (06h): Supported 00:10:09.596 Abort (08h): Supported 00:10:09.596 Set Features (09h): Supported 00:10:09.596 Get Features (0Ah): Supported 00:10:09.596 Asynchronous Event Request (0Ch): Supported 00:10:09.596 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:09.596 Directive Send (19h): Supported 00:10:09.596 Directive Receive (1Ah): Supported 00:10:09.596 Virtualization Management (1Ch): Supported 00:10:09.596 Doorbell Buffer Config (7Ch): Supported 00:10:09.596 Format NVM (80h): Supported LBA-Change 00:10:09.596 I/O Commands 00:10:09.596 ------------ 
00:10:09.596 Flush (00h): Supported LBA-Change 00:10:09.596 Write (01h): Supported LBA-Change 00:10:09.596 Read (02h): Supported 00:10:09.596 Compare (05h): Supported 00:10:09.596 Write Zeroes (08h): Supported LBA-Change 00:10:09.596 Dataset Management (09h): Supported LBA-Change 00:10:09.596 Unknown (0Ch): Supported 00:10:09.596 Unknown (12h): Supported 00:10:09.596 Copy (19h): Supported LBA-Change 00:10:09.596 Unknown (1Dh): Supported LBA-Change 00:10:09.596 00:10:09.596 Error Log 00:10:09.596 ========= 00:10:09.596 00:10:09.596 Arbitration 00:10:09.596 =========== 00:10:09.596 Arbitration Burst: no limit 00:10:09.596 00:10:09.596 Power Management 00:10:09.596 ================ 00:10:09.596 Number of Power States: 1 00:10:09.596 Current Power State: Power State #0 00:10:09.596 Power State #0: 00:10:09.596 Max Power: 25.00 W 00:10:09.596 Non-Operational State: Operational 00:10:09.596 Entry Latency: 16 microseconds 00:10:09.596 Exit Latency: 4 microseconds 00:10:09.596 Relative Read Throughput: 0 00:10:09.596 Relative Read Latency: 0 00:10:09.596 Relative Write Throughput: 0 00:10:09.596 Relative Write Latency: 0 00:10:09.596 Idle Power: Not Reported 00:10:09.596 Active Power: Not Reported 00:10:09.596 Non-Operational Permissive Mode: Not Supported 00:10:09.596 00:10:09.596 Health Information 00:10:09.597 ================== 00:10:09.597 Critical Warnings: 00:10:09.597 Available Spare Space: OK 00:10:09.597 Temperature: OK 00:10:09.597 Device Reliability: OK 00:10:09.597 Read Only: No 00:10:09.597 Volatile Memory Backup: OK 00:10:09.597 Current Temperature: 323 Kelvin (50 Celsius) 00:10:09.597 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:09.597 Available Spare: 0% 00:10:09.597 Available Spare Threshold: 0% 00:10:09.597 Life Percentage Used: 0% 00:10:09.597 Data Units Read: 1025 00:10:09.597 Data Units Written: 857 00:10:09.597 Host Read Commands: 48828 00:10:09.597 Host Write Commands: 47322 00:10:09.597 Controller Busy Time: 0 minutes 00:10:09.597 Power Cycles: 0 00:10:09.597 Power On Hours: 0 hours 00:10:09.597 Unsafe Shutdowns: 0 00:10:09.597 Unrecoverable Media Errors: 0 00:10:09.597 Lifetime Error Log Entries: 0 00:10:09.597 Warning Temperature Time: 0 minutes 00:10:09.597 Critical Temperature Time: 0 minutes 00:10:09.597 00:10:09.597 Number of Queues 00:10:09.597 ================ 00:10:09.597 Number of I/O Submission Queues: 64 00:10:09.597 Number of I/O Completion Queues: 64 00:10:09.597 00:10:09.597 ZNS Specific Controller Data 00:10:09.597 ============================ 00:10:09.597 Zone Append Size Limit: 0 00:10:09.597 00:10:09.597 00:10:09.597 Active Namespaces 00:10:09.597 ================= 00:10:09.597 Namespace ID:1 00:10:09.597 Error Recovery Timeout: Unlimited 00:10:09.597 Command Set Identifier: NVM (00h) 00:10:09.597 Deallocate: Supported 00:10:09.597 Deallocated/Unwritten Error: Supported 00:10:09.597 Deallocated Read Value: All 0x00 00:10:09.597 Deallocate in Write Zeroes: Not Supported 00:10:09.597 Deallocated Guard Field: 0xFFFF 00:10:09.597 Flush: Supported 00:10:09.597 Reservation: Not Supported 00:10:09.597 Metadata Transferred as: Separate Metadata Buffer 00:10:09.597 Namespace Sharing Capabilities: Private 00:10:09.597 Size (in LBAs): 1548666 (5GiB) 00:10:09.597 Capacity (in LBAs): 1548666 (5GiB) 00:10:09.597 Utilization (in LBAs): 1548666 (5GiB) 00:10:09.597 Thin Provisioning: Not Supported 00:10:09.597 Per-NS Atomic Units: No 00:10:09.597 Maximum Single Source Range Length: 128 00:10:09.597 Maximum Copy Length: 128 00:10:09.597 Maximum Source Range Count: 
128 00:10:09.597 NGUID/EUI64 Never Reused: No 00:10:09.597 Namespace Write Protected: No 00:10:09.597 Number of LBA Formats: 8 00:10:09.597 Current LBA Format: LBA Format #07 00:10:09.597 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:09.597 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:09.597 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:09.597 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:09.597 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:09.597 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:09.597 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:09.597 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:09.597 00:10:09.597 13:09:06 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:09.597 13:09:06 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:10:09.856 ===================================================== 00:10:09.856 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:09.856 ===================================================== 00:10:09.856 Controller Capabilities/Features 00:10:09.856 ================================ 00:10:09.856 Vendor ID: 1b36 00:10:09.856 Subsystem Vendor ID: 1af4 00:10:09.856 Serial Number: 12341 00:10:09.856 Model Number: QEMU NVMe Ctrl 00:10:09.856 Firmware Version: 8.0.0 00:10:09.856 Recommended Arb Burst: 6 00:10:09.856 IEEE OUI Identifier: 00 54 52 00:10:09.856 Multi-path I/O 00:10:09.856 May have multiple subsystem ports: No 00:10:09.856 May have multiple controllers: No 00:10:09.856 Associated with SR-IOV VF: No 00:10:09.856 Max Data Transfer Size: 524288 00:10:09.856 Max Number of Namespaces: 256 00:10:09.856 Max Number of I/O Queues: 64 00:10:09.856 NVMe Specification Version (VS): 1.4 00:10:09.856 NVMe Specification Version (Identify): 1.4 00:10:09.856 Maximum Queue Entries: 2048 00:10:09.856 Contiguous Queues Required: Yes 00:10:09.856 Arbitration Mechanisms Supported 00:10:09.856 Weighted Round Robin: Not Supported 00:10:09.856 Vendor Specific: Not Supported 00:10:09.856 Reset Timeout: 7500 ms 00:10:09.856 Doorbell Stride: 4 bytes 00:10:09.856 NVM Subsystem Reset: Not Supported 00:10:09.856 Command Sets Supported 00:10:09.856 NVM Command Set: Supported 00:10:09.856 Boot Partition: Not Supported 00:10:09.856 Memory Page Size Minimum: 4096 bytes 00:10:09.856 Memory Page Size Maximum: 65536 bytes 00:10:09.856 Persistent Memory Region: Not Supported 00:10:09.856 Optional Asynchronous Events Supported 00:10:09.856 Namespace Attribute Notices: Supported 00:10:09.856 Firmware Activation Notices: Not Supported 00:10:09.856 ANA Change Notices: Not Supported 00:10:09.856 PLE Aggregate Log Change Notices: Not Supported 00:10:09.856 LBA Status Info Alert Notices: Not Supported 00:10:09.856 EGE Aggregate Log Change Notices: Not Supported 00:10:09.856 Normal NVM Subsystem Shutdown event: Not Supported 00:10:09.856 Zone Descriptor Change Notices: Not Supported 00:10:09.856 Discovery Log Change Notices: Not Supported 00:10:09.856 Controller Attributes 00:10:09.856 128-bit Host Identifier: Not Supported 00:10:09.856 Non-Operational Permissive Mode: Not Supported 00:10:09.856 NVM Sets: Not Supported 00:10:09.856 Read Recovery Levels: Not Supported 00:10:09.857 Endurance Groups: Not Supported 00:10:09.857 Predictable Latency Mode: Not Supported 00:10:09.857 Traffic Based Keep ALive: Not Supported 00:10:09.857 Namespace Granularity: Not Supported 00:10:09.857 SQ Associations: Not Supported 
00:10:09.857 UUID List: Not Supported 00:10:09.857 Multi-Domain Subsystem: Not Supported 00:10:09.857 Fixed Capacity Management: Not Supported 00:10:09.857 Variable Capacity Management: Not Supported 00:10:09.857 Delete Endurance Group: Not Supported 00:10:09.857 Delete NVM Set: Not Supported 00:10:09.857 Extended LBA Formats Supported: Supported 00:10:09.857 Flexible Data Placement Supported: Not Supported 00:10:09.857 00:10:09.857 Controller Memory Buffer Support 00:10:09.857 ================================ 00:10:09.857 Supported: No 00:10:09.857 00:10:09.857 Persistent Memory Region Support 00:10:09.857 ================================ 00:10:09.857 Supported: No 00:10:09.857 00:10:09.857 Admin Command Set Attributes 00:10:09.857 ============================ 00:10:09.857 Security Send/Receive: Not Supported 00:10:09.857 Format NVM: Supported 00:10:09.857 Firmware Activate/Download: Not Supported 00:10:09.857 Namespace Management: Supported 00:10:09.857 Device Self-Test: Not Supported 00:10:09.857 Directives: Supported 00:10:09.857 NVMe-MI: Not Supported 00:10:09.857 Virtualization Management: Not Supported 00:10:09.857 Doorbell Buffer Config: Supported 00:10:09.857 Get LBA Status Capability: Not Supported 00:10:09.857 Command & Feature Lockdown Capability: Not Supported 00:10:09.857 Abort Command Limit: 4 00:10:09.857 Async Event Request Limit: 4 00:10:09.857 Number of Firmware Slots: N/A 00:10:09.857 Firmware Slot 1 Read-Only: N/A 00:10:09.857 Firmware Activation Without Reset: N/A 00:10:09.857 Multiple Update Detection Support: N/A 00:10:09.857 Firmware Update Granularity: No Information Provided 00:10:09.857 Per-Namespace SMART Log: Yes 00:10:09.857 Asymmetric Namespace Access Log Page: Not Supported 00:10:09.857 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:09.857 Command Effects Log Page: Supported 00:10:09.857 Get Log Page Extended Data: Supported 00:10:09.857 Telemetry Log Pages: Not Supported 00:10:09.857 Persistent Event Log Pages: Not Supported 00:10:09.857 Supported Log Pages Log Page: May Support 00:10:09.857 Commands Supported & Effects Log Page: Not Supported 00:10:09.857 Feature Identifiers & Effects Log Page:May Support 00:10:09.857 NVMe-MI Commands & Effects Log Page: May Support 00:10:09.857 Data Area 4 for Telemetry Log: Not Supported 00:10:09.857 Error Log Page Entries Supported: 1 00:10:09.857 Keep Alive: Not Supported 00:10:09.857 00:10:09.857 NVM Command Set Attributes 00:10:09.857 ========================== 00:10:09.857 Submission Queue Entry Size 00:10:09.857 Max: 64 00:10:09.857 Min: 64 00:10:09.857 Completion Queue Entry Size 00:10:09.857 Max: 16 00:10:09.857 Min: 16 00:10:09.857 Number of Namespaces: 256 00:10:09.857 Compare Command: Supported 00:10:09.857 Write Uncorrectable Command: Not Supported 00:10:09.857 Dataset Management Command: Supported 00:10:09.857 Write Zeroes Command: Supported 00:10:09.857 Set Features Save Field: Supported 00:10:09.857 Reservations: Not Supported 00:10:09.857 Timestamp: Supported 00:10:09.857 Copy: Supported 00:10:09.857 Volatile Write Cache: Present 00:10:09.857 Atomic Write Unit (Normal): 1 00:10:09.857 Atomic Write Unit (PFail): 1 00:10:09.857 Atomic Compare & Write Unit: 1 00:10:09.857 Fused Compare & Write: Not Supported 00:10:09.857 Scatter-Gather List 00:10:09.857 SGL Command Set: Supported 00:10:09.857 SGL Keyed: Not Supported 00:10:09.857 SGL Bit Bucket Descriptor: Not Supported 00:10:09.857 SGL Metadata Pointer: Not Supported 00:10:09.857 Oversized SGL: Not Supported 00:10:09.857 SGL Metadata Address: Not 
Supported 00:10:09.857 SGL Offset: Not Supported 00:10:09.857 Transport SGL Data Block: Not Supported 00:10:09.857 Replay Protected Memory Block: Not Supported 00:10:09.857 00:10:09.857 Firmware Slot Information 00:10:09.857 ========================= 00:10:09.857 Active slot: 1 00:10:09.857 Slot 1 Firmware Revision: 1.0 00:10:09.857 00:10:09.857 00:10:09.857 Commands Supported and Effects 00:10:09.857 ============================== 00:10:09.857 Admin Commands 00:10:09.857 -------------- 00:10:09.857 Delete I/O Submission Queue (00h): Supported 00:10:09.857 Create I/O Submission Queue (01h): Supported 00:10:09.857 Get Log Page (02h): Supported 00:10:09.857 Delete I/O Completion Queue (04h): Supported 00:10:09.857 Create I/O Completion Queue (05h): Supported 00:10:09.857 Identify (06h): Supported 00:10:09.857 Abort (08h): Supported 00:10:09.857 Set Features (09h): Supported 00:10:09.857 Get Features (0Ah): Supported 00:10:09.857 Asynchronous Event Request (0Ch): Supported 00:10:09.857 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:09.857 Directive Send (19h): Supported 00:10:09.857 Directive Receive (1Ah): Supported 00:10:09.857 Virtualization Management (1Ch): Supported 00:10:09.857 Doorbell Buffer Config (7Ch): Supported 00:10:09.857 Format NVM (80h): Supported LBA-Change 00:10:09.857 I/O Commands 00:10:09.857 ------------ 00:10:09.857 Flush (00h): Supported LBA-Change 00:10:09.857 Write (01h): Supported LBA-Change 00:10:09.857 Read (02h): Supported 00:10:09.857 Compare (05h): Supported 00:10:09.857 Write Zeroes (08h): Supported LBA-Change 00:10:09.857 Dataset Management (09h): Supported LBA-Change 00:10:09.857 Unknown (0Ch): Supported 00:10:09.857 Unknown (12h): Supported 00:10:09.857 Copy (19h): Supported LBA-Change 00:10:09.857 Unknown (1Dh): Supported LBA-Change 00:10:09.857 00:10:09.857 Error Log 00:10:09.857 ========= 00:10:09.857 00:10:09.857 Arbitration 00:10:09.857 =========== 00:10:09.857 Arbitration Burst: no limit 00:10:09.857 00:10:09.857 Power Management 00:10:09.857 ================ 00:10:09.857 Number of Power States: 1 00:10:09.857 Current Power State: Power State #0 00:10:09.857 Power State #0: 00:10:09.857 Max Power: 25.00 W 00:10:09.857 Non-Operational State: Operational 00:10:09.857 Entry Latency: 16 microseconds 00:10:09.857 Exit Latency: 4 microseconds 00:10:09.857 Relative Read Throughput: 0 00:10:09.857 Relative Read Latency: 0 00:10:09.857 Relative Write Throughput: 0 00:10:09.857 Relative Write Latency: 0 00:10:09.857 Idle Power: Not Reported 00:10:09.857 Active Power: Not Reported 00:10:09.857 Non-Operational Permissive Mode: Not Supported 00:10:09.857 00:10:09.857 Health Information 00:10:09.857 ================== 00:10:09.857 Critical Warnings: 00:10:09.857 Available Spare Space: OK 00:10:09.857 Temperature: OK 00:10:09.857 Device Reliability: OK 00:10:09.857 Read Only: No 00:10:09.857 Volatile Memory Backup: OK 00:10:09.857 Current Temperature: 323 Kelvin (50 Celsius) 00:10:09.857 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:09.857 Available Spare: 0% 00:10:09.857 Available Spare Threshold: 0% 00:10:09.857 Life Percentage Used: 0% 00:10:09.857 Data Units Read: 762 00:10:09.857 Data Units Written: 613 00:10:09.858 Host Read Commands: 34902 00:10:09.858 Host Write Commands: 32675 00:10:09.858 Controller Busy Time: 0 minutes 00:10:09.858 Power Cycles: 0 00:10:09.858 Power On Hours: 0 hours 00:10:09.858 Unsafe Shutdowns: 0 00:10:09.858 Unrecoverable Media Errors: 0 00:10:09.858 Lifetime Error Log Entries: 0 00:10:09.858 Warning 
Temperature Time: 0 minutes 00:10:09.858 Critical Temperature Time: 0 minutes 00:10:09.858 00:10:09.858 Number of Queues 00:10:09.858 ================ 00:10:09.858 Number of I/O Submission Queues: 64 00:10:09.858 Number of I/O Completion Queues: 64 00:10:09.858 00:10:09.858 ZNS Specific Controller Data 00:10:09.858 ============================ 00:10:09.858 Zone Append Size Limit: 0 00:10:09.858 00:10:09.858 00:10:09.858 Active Namespaces 00:10:09.858 ================= 00:10:09.858 Namespace ID:1 00:10:09.858 Error Recovery Timeout: Unlimited 00:10:09.858 Command Set Identifier: NVM (00h) 00:10:09.858 Deallocate: Supported 00:10:09.858 Deallocated/Unwritten Error: Supported 00:10:09.858 Deallocated Read Value: All 0x00 00:10:09.858 Deallocate in Write Zeroes: Not Supported 00:10:09.858 Deallocated Guard Field: 0xFFFF 00:10:09.858 Flush: Supported 00:10:09.858 Reservation: Not Supported 00:10:09.858 Namespace Sharing Capabilities: Private 00:10:09.858 Size (in LBAs): 1310720 (5GiB) 00:10:09.858 Capacity (in LBAs): 1310720 (5GiB) 00:10:09.858 Utilization (in LBAs): 1310720 (5GiB) 00:10:09.858 Thin Provisioning: Not Supported 00:10:09.858 Per-NS Atomic Units: No 00:10:09.858 Maximum Single Source Range Length: 128 00:10:09.858 Maximum Copy Length: 128 00:10:09.858 Maximum Source Range Count: 128 00:10:09.858 NGUID/EUI64 Never Reused: No 00:10:09.858 Namespace Write Protected: No 00:10:09.858 Number of LBA Formats: 8 00:10:09.858 Current LBA Format: LBA Format #04 00:10:09.858 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:09.858 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:09.858 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:09.858 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:09.858 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:09.858 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:09.858 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:09.858 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:09.858 00:10:09.858 13:09:06 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:09.858 13:09:06 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:10:10.117 ===================================================== 00:10:10.117 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:10.117 ===================================================== 00:10:10.117 Controller Capabilities/Features 00:10:10.117 ================================ 00:10:10.117 Vendor ID: 1b36 00:10:10.117 Subsystem Vendor ID: 1af4 00:10:10.117 Serial Number: 12342 00:10:10.117 Model Number: QEMU NVMe Ctrl 00:10:10.117 Firmware Version: 8.0.0 00:10:10.117 Recommended Arb Burst: 6 00:10:10.117 IEEE OUI Identifier: 00 54 52 00:10:10.117 Multi-path I/O 00:10:10.117 May have multiple subsystem ports: No 00:10:10.117 May have multiple controllers: No 00:10:10.117 Associated with SR-IOV VF: No 00:10:10.117 Max Data Transfer Size: 524288 00:10:10.117 Max Number of Namespaces: 256 00:10:10.117 Max Number of I/O Queues: 64 00:10:10.117 NVMe Specification Version (VS): 1.4 00:10:10.117 NVMe Specification Version (Identify): 1.4 00:10:10.117 Maximum Queue Entries: 2048 00:10:10.117 Contiguous Queues Required: Yes 00:10:10.117 Arbitration Mechanisms Supported 00:10:10.117 Weighted Round Robin: Not Supported 00:10:10.117 Vendor Specific: Not Supported 00:10:10.117 Reset Timeout: 7500 ms 00:10:10.117 Doorbell Stride: 4 bytes 00:10:10.117 NVM Subsystem Reset: Not Supported 
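The "(5GiB)" annotation next to the namespace sizes above is simply the LBA count multiplied by the data size of the current LBA format (#04, 4096 bytes). A quick check of that arithmetic in plain C, with the values copied from the output:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t nsze      = 1310720;  /* Size (in LBAs) from the identify output */
    uint32_t lba_bytes = 4096;     /* data size of the current LBA format (#04) */

    uint64_t bytes = nsze * lba_bytes;
    printf("%llu bytes = %.0f GiB\n",
           (unsigned long long)bytes, bytes / (1024.0 * 1024.0 * 1024.0));
    /* 1310720 * 4096 = 5368709120 bytes = 5 GiB, matching "(5GiB)" above. */
    return 0;
}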
00:10:10.117 Command Sets Supported 00:10:10.117 NVM Command Set: Supported 00:10:10.117 Boot Partition: Not Supported 00:10:10.117 Memory Page Size Minimum: 4096 bytes 00:10:10.117 Memory Page Size Maximum: 65536 bytes 00:10:10.117 Persistent Memory Region: Not Supported 00:10:10.117 Optional Asynchronous Events Supported 00:10:10.117 Namespace Attribute Notices: Supported 00:10:10.117 Firmware Activation Notices: Not Supported 00:10:10.117 ANA Change Notices: Not Supported 00:10:10.117 PLE Aggregate Log Change Notices: Not Supported 00:10:10.117 LBA Status Info Alert Notices: Not Supported 00:10:10.117 EGE Aggregate Log Change Notices: Not Supported 00:10:10.117 Normal NVM Subsystem Shutdown event: Not Supported 00:10:10.117 Zone Descriptor Change Notices: Not Supported 00:10:10.117 Discovery Log Change Notices: Not Supported 00:10:10.117 Controller Attributes 00:10:10.117 128-bit Host Identifier: Not Supported 00:10:10.117 Non-Operational Permissive Mode: Not Supported 00:10:10.117 NVM Sets: Not Supported 00:10:10.117 Read Recovery Levels: Not Supported 00:10:10.117 Endurance Groups: Not Supported 00:10:10.117 Predictable Latency Mode: Not Supported 00:10:10.117 Traffic Based Keep ALive: Not Supported 00:10:10.117 Namespace Granularity: Not Supported 00:10:10.117 SQ Associations: Not Supported 00:10:10.117 UUID List: Not Supported 00:10:10.117 Multi-Domain Subsystem: Not Supported 00:10:10.117 Fixed Capacity Management: Not Supported 00:10:10.117 Variable Capacity Management: Not Supported 00:10:10.117 Delete Endurance Group: Not Supported 00:10:10.117 Delete NVM Set: Not Supported 00:10:10.117 Extended LBA Formats Supported: Supported 00:10:10.117 Flexible Data Placement Supported: Not Supported 00:10:10.117 00:10:10.117 Controller Memory Buffer Support 00:10:10.117 ================================ 00:10:10.117 Supported: No 00:10:10.117 00:10:10.117 Persistent Memory Region Support 00:10:10.117 ================================ 00:10:10.117 Supported: No 00:10:10.117 00:10:10.117 Admin Command Set Attributes 00:10:10.117 ============================ 00:10:10.117 Security Send/Receive: Not Supported 00:10:10.117 Format NVM: Supported 00:10:10.117 Firmware Activate/Download: Not Supported 00:10:10.117 Namespace Management: Supported 00:10:10.117 Device Self-Test: Not Supported 00:10:10.117 Directives: Supported 00:10:10.117 NVMe-MI: Not Supported 00:10:10.117 Virtualization Management: Not Supported 00:10:10.117 Doorbell Buffer Config: Supported 00:10:10.117 Get LBA Status Capability: Not Supported 00:10:10.117 Command & Feature Lockdown Capability: Not Supported 00:10:10.117 Abort Command Limit: 4 00:10:10.117 Async Event Request Limit: 4 00:10:10.117 Number of Firmware Slots: N/A 00:10:10.117 Firmware Slot 1 Read-Only: N/A 00:10:10.117 Firmware Activation Without Reset: N/A 00:10:10.118 Multiple Update Detection Support: N/A 00:10:10.118 Firmware Update Granularity: No Information Provided 00:10:10.118 Per-Namespace SMART Log: Yes 00:10:10.118 Asymmetric Namespace Access Log Page: Not Supported 00:10:10.118 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:10.118 Command Effects Log Page: Supported 00:10:10.118 Get Log Page Extended Data: Supported 00:10:10.118 Telemetry Log Pages: Not Supported 00:10:10.118 Persistent Event Log Pages: Not Supported 00:10:10.118 Supported Log Pages Log Page: May Support 00:10:10.118 Commands Supported & Effects Log Page: Not Supported 00:10:10.118 Feature Identifiers & Effects Log Page:May Support 00:10:10.118 NVMe-MI Commands & Effects Log Page: May 
Support 00:10:10.118 Data Area 4 for Telemetry Log: Not Supported 00:10:10.118 Error Log Page Entries Supported: 1 00:10:10.118 Keep Alive: Not Supported 00:10:10.118 00:10:10.118 NVM Command Set Attributes 00:10:10.118 ========================== 00:10:10.118 Submission Queue Entry Size 00:10:10.118 Max: 64 00:10:10.118 Min: 64 00:10:10.118 Completion Queue Entry Size 00:10:10.118 Max: 16 00:10:10.118 Min: 16 00:10:10.118 Number of Namespaces: 256 00:10:10.118 Compare Command: Supported 00:10:10.118 Write Uncorrectable Command: Not Supported 00:10:10.118 Dataset Management Command: Supported 00:10:10.118 Write Zeroes Command: Supported 00:10:10.118 Set Features Save Field: Supported 00:10:10.118 Reservations: Not Supported 00:10:10.118 Timestamp: Supported 00:10:10.118 Copy: Supported 00:10:10.118 Volatile Write Cache: Present 00:10:10.118 Atomic Write Unit (Normal): 1 00:10:10.118 Atomic Write Unit (PFail): 1 00:10:10.118 Atomic Compare & Write Unit: 1 00:10:10.118 Fused Compare & Write: Not Supported 00:10:10.118 Scatter-Gather List 00:10:10.118 SGL Command Set: Supported 00:10:10.118 SGL Keyed: Not Supported 00:10:10.118 SGL Bit Bucket Descriptor: Not Supported 00:10:10.118 SGL Metadata Pointer: Not Supported 00:10:10.118 Oversized SGL: Not Supported 00:10:10.118 SGL Metadata Address: Not Supported 00:10:10.118 SGL Offset: Not Supported 00:10:10.118 Transport SGL Data Block: Not Supported 00:10:10.118 Replay Protected Memory Block: Not Supported 00:10:10.118 00:10:10.118 Firmware Slot Information 00:10:10.118 ========================= 00:10:10.118 Active slot: 1 00:10:10.118 Slot 1 Firmware Revision: 1.0 00:10:10.118 00:10:10.118 00:10:10.118 Commands Supported and Effects 00:10:10.118 ============================== 00:10:10.118 Admin Commands 00:10:10.118 -------------- 00:10:10.118 Delete I/O Submission Queue (00h): Supported 00:10:10.118 Create I/O Submission Queue (01h): Supported 00:10:10.118 Get Log Page (02h): Supported 00:10:10.118 Delete I/O Completion Queue (04h): Supported 00:10:10.118 Create I/O Completion Queue (05h): Supported 00:10:10.118 Identify (06h): Supported 00:10:10.118 Abort (08h): Supported 00:10:10.118 Set Features (09h): Supported 00:10:10.118 Get Features (0Ah): Supported 00:10:10.118 Asynchronous Event Request (0Ch): Supported 00:10:10.118 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:10.118 Directive Send (19h): Supported 00:10:10.118 Directive Receive (1Ah): Supported 00:10:10.118 Virtualization Management (1Ch): Supported 00:10:10.118 Doorbell Buffer Config (7Ch): Supported 00:10:10.118 Format NVM (80h): Supported LBA-Change 00:10:10.118 I/O Commands 00:10:10.118 ------------ 00:10:10.118 Flush (00h): Supported LBA-Change 00:10:10.118 Write (01h): Supported LBA-Change 00:10:10.118 Read (02h): Supported 00:10:10.118 Compare (05h): Supported 00:10:10.118 Write Zeroes (08h): Supported LBA-Change 00:10:10.118 Dataset Management (09h): Supported LBA-Change 00:10:10.118 Unknown (0Ch): Supported 00:10:10.118 Unknown (12h): Supported 00:10:10.118 Copy (19h): Supported LBA-Change 00:10:10.118 Unknown (1Dh): Supported LBA-Change 00:10:10.118 00:10:10.118 Error Log 00:10:10.118 ========= 00:10:10.118 00:10:10.118 Arbitration 00:10:10.118 =========== 00:10:10.118 Arbitration Burst: no limit 00:10:10.118 00:10:10.118 Power Management 00:10:10.118 ================ 00:10:10.118 Number of Power States: 1 00:10:10.118 Current Power State: Power State #0 00:10:10.118 Power State #0: 00:10:10.118 Max Power: 25.00 W 00:10:10.118 Non-Operational State: 
Operational 00:10:10.118 Entry Latency: 16 microseconds 00:10:10.118 Exit Latency: 4 microseconds 00:10:10.118 Relative Read Throughput: 0 00:10:10.118 Relative Read Latency: 0 00:10:10.118 Relative Write Throughput: 0 00:10:10.118 Relative Write Latency: 0 00:10:10.118 Idle Power: Not Reported 00:10:10.118 Active Power: Not Reported 00:10:10.118 Non-Operational Permissive Mode: Not Supported 00:10:10.118 00:10:10.118 Health Information 00:10:10.118 ================== 00:10:10.118 Critical Warnings: 00:10:10.118 Available Spare Space: OK 00:10:10.118 Temperature: OK 00:10:10.118 Device Reliability: OK 00:10:10.118 Read Only: No 00:10:10.118 Volatile Memory Backup: OK 00:10:10.118 Current Temperature: 323 Kelvin (50 Celsius) 00:10:10.118 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:10.118 Available Spare: 0% 00:10:10.118 Available Spare Threshold: 0% 00:10:10.118 Life Percentage Used: 0% 00:10:10.118 Data Units Read: 2206 00:10:10.118 Data Units Written: 1886 00:10:10.118 Host Read Commands: 102839 00:10:10.118 Host Write Commands: 98609 00:10:10.118 Controller Busy Time: 0 minutes 00:10:10.118 Power Cycles: 0 00:10:10.118 Power On Hours: 0 hours 00:10:10.118 Unsafe Shutdowns: 0 00:10:10.118 Unrecoverable Media Errors: 0 00:10:10.118 Lifetime Error Log Entries: 0 00:10:10.118 Warning Temperature Time: 0 minutes 00:10:10.118 Critical Temperature Time: 0 minutes 00:10:10.118 00:10:10.118 Number of Queues 00:10:10.118 ================ 00:10:10.118 Number of I/O Submission Queues: 64 00:10:10.118 Number of I/O Completion Queues: 64 00:10:10.118 00:10:10.118 ZNS Specific Controller Data 00:10:10.118 ============================ 00:10:10.118 Zone Append Size Limit: 0 00:10:10.118 00:10:10.118 00:10:10.118 Active Namespaces 00:10:10.118 ================= 00:10:10.118 Namespace ID:1 00:10:10.118 Error Recovery Timeout: Unlimited 00:10:10.118 Command Set Identifier: NVM (00h) 00:10:10.118 Deallocate: Supported 00:10:10.118 Deallocated/Unwritten Error: Supported 00:10:10.118 Deallocated Read Value: All 0x00 00:10:10.118 Deallocate in Write Zeroes: Not Supported 00:10:10.118 Deallocated Guard Field: 0xFFFF 00:10:10.118 Flush: Supported 00:10:10.118 Reservation: Not Supported 00:10:10.118 Namespace Sharing Capabilities: Private 00:10:10.118 Size (in LBAs): 1048576 (4GiB) 00:10:10.118 Capacity (in LBAs): 1048576 (4GiB) 00:10:10.118 Utilization (in LBAs): 1048576 (4GiB) 00:10:10.118 Thin Provisioning: Not Supported 00:10:10.118 Per-NS Atomic Units: No 00:10:10.118 Maximum Single Source Range Length: 128 00:10:10.118 Maximum Copy Length: 128 00:10:10.118 Maximum Source Range Count: 128 00:10:10.118 NGUID/EUI64 Never Reused: No 00:10:10.118 Namespace Write Protected: No 00:10:10.118 Number of LBA Formats: 8 00:10:10.118 Current LBA Format: LBA Format #04 00:10:10.118 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:10.118 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:10.118 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:10.118 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:10.118 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:10.118 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:10.118 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:10.118 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:10.118 00:10:10.118 Namespace ID:2 00:10:10.118 Error Recovery Timeout: Unlimited 00:10:10.118 Command Set Identifier: NVM (00h) 00:10:10.118 Deallocate: Supported 00:10:10.118 Deallocated/Unwritten Error: Supported 00:10:10.118 Deallocated Read Value: All 
0x00 00:10:10.118 Deallocate in Write Zeroes: Not Supported 00:10:10.119 Deallocated Guard Field: 0xFFFF 00:10:10.119 Flush: Supported 00:10:10.119 Reservation: Not Supported 00:10:10.119 Namespace Sharing Capabilities: Private 00:10:10.119 Size (in LBAs): 1048576 (4GiB) 00:10:10.119 Capacity (in LBAs): 1048576 (4GiB) 00:10:10.119 Utilization (in LBAs): 1048576 (4GiB) 00:10:10.119 Thin Provisioning: Not Supported 00:10:10.119 Per-NS Atomic Units: No 00:10:10.119 Maximum Single Source Range Length: 128 00:10:10.119 Maximum Copy Length: 128 00:10:10.119 Maximum Source Range Count: 128 00:10:10.119 NGUID/EUI64 Never Reused: No 00:10:10.119 Namespace Write Protected: No 00:10:10.119 Number of LBA Formats: 8 00:10:10.119 Current LBA Format: LBA Format #04 00:10:10.119 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:10.119 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:10.119 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:10.119 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:10.119 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:10.119 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:10.119 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:10.119 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:10.119 00:10:10.119 Namespace ID:3 00:10:10.119 Error Recovery Timeout: Unlimited 00:10:10.119 Command Set Identifier: NVM (00h) 00:10:10.119 Deallocate: Supported 00:10:10.119 Deallocated/Unwritten Error: Supported 00:10:10.119 Deallocated Read Value: All 0x00 00:10:10.119 Deallocate in Write Zeroes: Not Supported 00:10:10.119 Deallocated Guard Field: 0xFFFF 00:10:10.119 Flush: Supported 00:10:10.119 Reservation: Not Supported 00:10:10.119 Namespace Sharing Capabilities: Private 00:10:10.119 Size (in LBAs): 1048576 (4GiB) 00:10:10.119 Capacity (in LBAs): 1048576 (4GiB) 00:10:10.119 Utilization (in LBAs): 1048576 (4GiB) 00:10:10.119 Thin Provisioning: Not Supported 00:10:10.119 Per-NS Atomic Units: No 00:10:10.119 Maximum Single Source Range Length: 128 00:10:10.119 Maximum Copy Length: 128 00:10:10.119 Maximum Source Range Count: 128 00:10:10.119 NGUID/EUI64 Never Reused: No 00:10:10.119 Namespace Write Protected: No 00:10:10.119 Number of LBA Formats: 8 00:10:10.119 Current LBA Format: LBA Format #04 00:10:10.119 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:10.119 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:10.119 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:10.119 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:10.119 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:10.119 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:10.119 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:10.119 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:10.119 00:10:10.119 13:09:06 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:10.119 13:09:06 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:10:10.378 ===================================================== 00:10:10.378 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:10.378 ===================================================== 00:10:10.378 Controller Capabilities/Features 00:10:10.378 ================================ 00:10:10.378 Vendor ID: 1b36 00:10:10.378 Subsystem Vendor ID: 1af4 00:10:10.378 Serial Number: 12343 00:10:10.378 Model Number: QEMU NVMe Ctrl 00:10:10.378 Firmware Version: 8.0.0 00:10:10.378 Recommended Arb Burst: 6 
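The repeated LBA Format tables above are decoded from the namespace's LBA format descriptors: the NVMe Identify Namespace data stores the metadata size in bytes and the data size as a power-of-two exponent (LBADS), so 512 corresponds to LBADS = 9 and 4096 to LBADS = 12. A small sketch of that decode in plain C, using hypothetical descriptor values that reproduce the table (this is not the spdk_nvme_identify source):

#include <stdio.h>
#include <stdint.h>

struct lbaf { uint16_t ms; uint8_t lbads; };  /* subset of an NVMe LBA format descriptor */

int main(void)
{
    /* Eight descriptors mirroring the table printed for these QEMU namespaces. */
    struct lbaf fmt[8] = {
        {0, 9},  {8, 9},  {16, 9},  {64, 9},
        {0, 12}, {8, 12}, {16, 12}, {64, 12},
    };

    for (int i = 0; i < 8; i++) {
        printf("LBA Format #%02d: Data Size: %u Metadata Size: %u\n",
               i, 1u << fmt[i].lbads, (unsigned)fmt[i].ms);
    }
    return 0;
}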
00:10:10.378 IEEE OUI Identifier: 00 54 52 00:10:10.378 Multi-path I/O 00:10:10.378 May have multiple subsystem ports: No 00:10:10.378 May have multiple controllers: Yes 00:10:10.378 Associated with SR-IOV VF: No 00:10:10.378 Max Data Transfer Size: 524288 00:10:10.378 Max Number of Namespaces: 256 00:10:10.378 Max Number of I/O Queues: 64 00:10:10.378 NVMe Specification Version (VS): 1.4 00:10:10.378 NVMe Specification Version (Identify): 1.4 00:10:10.378 Maximum Queue Entries: 2048 00:10:10.378 Contiguous Queues Required: Yes 00:10:10.378 Arbitration Mechanisms Supported 00:10:10.378 Weighted Round Robin: Not Supported 00:10:10.378 Vendor Specific: Not Supported 00:10:10.378 Reset Timeout: 7500 ms 00:10:10.378 Doorbell Stride: 4 bytes 00:10:10.378 NVM Subsystem Reset: Not Supported 00:10:10.378 Command Sets Supported 00:10:10.378 NVM Command Set: Supported 00:10:10.378 Boot Partition: Not Supported 00:10:10.378 Memory Page Size Minimum: 4096 bytes 00:10:10.378 Memory Page Size Maximum: 65536 bytes 00:10:10.378 Persistent Memory Region: Not Supported 00:10:10.378 Optional Asynchronous Events Supported 00:10:10.378 Namespace Attribute Notices: Supported 00:10:10.378 Firmware Activation Notices: Not Supported 00:10:10.378 ANA Change Notices: Not Supported 00:10:10.378 PLE Aggregate Log Change Notices: Not Supported 00:10:10.378 LBA Status Info Alert Notices: Not Supported 00:10:10.378 EGE Aggregate Log Change Notices: Not Supported 00:10:10.378 Normal NVM Subsystem Shutdown event: Not Supported 00:10:10.378 Zone Descriptor Change Notices: Not Supported 00:10:10.378 Discovery Log Change Notices: Not Supported 00:10:10.378 Controller Attributes 00:10:10.378 128-bit Host Identifier: Not Supported 00:10:10.378 Non-Operational Permissive Mode: Not Supported 00:10:10.378 NVM Sets: Not Supported 00:10:10.378 Read Recovery Levels: Not Supported 00:10:10.378 Endurance Groups: Supported 00:10:10.378 Predictable Latency Mode: Not Supported 00:10:10.378 Traffic Based Keep ALive: Not Supported 00:10:10.378 Namespace Granularity: Not Supported 00:10:10.378 SQ Associations: Not Supported 00:10:10.378 UUID List: Not Supported 00:10:10.378 Multi-Domain Subsystem: Not Supported 00:10:10.378 Fixed Capacity Management: Not Supported 00:10:10.378 Variable Capacity Management: Not Supported 00:10:10.378 Delete Endurance Group: Not Supported 00:10:10.378 Delete NVM Set: Not Supported 00:10:10.378 Extended LBA Formats Supported: Supported 00:10:10.378 Flexible Data Placement Supported: Supported 00:10:10.378 00:10:10.378 Controller Memory Buffer Support 00:10:10.378 ================================ 00:10:10.378 Supported: No 00:10:10.378 00:10:10.378 Persistent Memory Region Support 00:10:10.378 ================================ 00:10:10.378 Supported: No 00:10:10.378 00:10:10.378 Admin Command Set Attributes 00:10:10.378 ============================ 00:10:10.378 Security Send/Receive: Not Supported 00:10:10.378 Format NVM: Supported 00:10:10.378 Firmware Activate/Download: Not Supported 00:10:10.378 Namespace Management: Supported 00:10:10.378 Device Self-Test: Not Supported 00:10:10.378 Directives: Supported 00:10:10.378 NVMe-MI: Not Supported 00:10:10.378 Virtualization Management: Not Supported 00:10:10.378 Doorbell Buffer Config: Supported 00:10:10.378 Get LBA Status Capability: Not Supported 00:10:10.378 Command & Feature Lockdown Capability: Not Supported 00:10:10.378 Abort Command Limit: 4 00:10:10.378 Async Event Request Limit: 4 00:10:10.378 Number of Firmware Slots: N/A 00:10:10.378 Firmware Slot 1 
Read-Only: N/A 00:10:10.378 Firmware Activation Without Reset: N/A 00:10:10.378 Multiple Update Detection Support: N/A 00:10:10.378 Firmware Update Granularity: No Information Provided 00:10:10.378 Per-Namespace SMART Log: Yes 00:10:10.378 Asymmetric Namespace Access Log Page: Not Supported 00:10:10.378 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:10.378 Command Effects Log Page: Supported 00:10:10.378 Get Log Page Extended Data: Supported 00:10:10.378 Telemetry Log Pages: Not Supported 00:10:10.378 Persistent Event Log Pages: Not Supported 00:10:10.378 Supported Log Pages Log Page: May Support 00:10:10.378 Commands Supported & Effects Log Page: Not Supported 00:10:10.378 Feature Identifiers & Effects Log Page:May Support 00:10:10.378 NVMe-MI Commands & Effects Log Page: May Support 00:10:10.378 Data Area 4 for Telemetry Log: Not Supported 00:10:10.378 Error Log Page Entries Supported: 1 00:10:10.378 Keep Alive: Not Supported 00:10:10.378 00:10:10.378 NVM Command Set Attributes 00:10:10.378 ========================== 00:10:10.378 Submission Queue Entry Size 00:10:10.378 Max: 64 00:10:10.378 Min: 64 00:10:10.378 Completion Queue Entry Size 00:10:10.378 Max: 16 00:10:10.378 Min: 16 00:10:10.378 Number of Namespaces: 256 00:10:10.378 Compare Command: Supported 00:10:10.378 Write Uncorrectable Command: Not Supported 00:10:10.378 Dataset Management Command: Supported 00:10:10.378 Write Zeroes Command: Supported 00:10:10.378 Set Features Save Field: Supported 00:10:10.378 Reservations: Not Supported 00:10:10.378 Timestamp: Supported 00:10:10.379 Copy: Supported 00:10:10.379 Volatile Write Cache: Present 00:10:10.379 Atomic Write Unit (Normal): 1 00:10:10.379 Atomic Write Unit (PFail): 1 00:10:10.379 Atomic Compare & Write Unit: 1 00:10:10.379 Fused Compare & Write: Not Supported 00:10:10.379 Scatter-Gather List 00:10:10.379 SGL Command Set: Supported 00:10:10.379 SGL Keyed: Not Supported 00:10:10.379 SGL Bit Bucket Descriptor: Not Supported 00:10:10.379 SGL Metadata Pointer: Not Supported 00:10:10.379 Oversized SGL: Not Supported 00:10:10.379 SGL Metadata Address: Not Supported 00:10:10.379 SGL Offset: Not Supported 00:10:10.379 Transport SGL Data Block: Not Supported 00:10:10.379 Replay Protected Memory Block: Not Supported 00:10:10.379 00:10:10.379 Firmware Slot Information 00:10:10.379 ========================= 00:10:10.379 Active slot: 1 00:10:10.379 Slot 1 Firmware Revision: 1.0 00:10:10.379 00:10:10.379 00:10:10.379 Commands Supported and Effects 00:10:10.379 ============================== 00:10:10.379 Admin Commands 00:10:10.379 -------------- 00:10:10.379 Delete I/O Submission Queue (00h): Supported 00:10:10.379 Create I/O Submission Queue (01h): Supported 00:10:10.379 Get Log Page (02h): Supported 00:10:10.379 Delete I/O Completion Queue (04h): Supported 00:10:10.379 Create I/O Completion Queue (05h): Supported 00:10:10.379 Identify (06h): Supported 00:10:10.379 Abort (08h): Supported 00:10:10.379 Set Features (09h): Supported 00:10:10.379 Get Features (0Ah): Supported 00:10:10.379 Asynchronous Event Request (0Ch): Supported 00:10:10.379 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:10.379 Directive Send (19h): Supported 00:10:10.379 Directive Receive (1Ah): Supported 00:10:10.379 Virtualization Management (1Ch): Supported 00:10:10.379 Doorbell Buffer Config (7Ch): Supported 00:10:10.379 Format NVM (80h): Supported LBA-Change 00:10:10.379 I/O Commands 00:10:10.379 ------------ 00:10:10.379 Flush (00h): Supported LBA-Change 00:10:10.379 Write (01h): Supported 
LBA-Change 00:10:10.379 Read (02h): Supported 00:10:10.379 Compare (05h): Supported 00:10:10.379 Write Zeroes (08h): Supported LBA-Change 00:10:10.379 Dataset Management (09h): Supported LBA-Change 00:10:10.379 Unknown (0Ch): Supported 00:10:10.379 Unknown (12h): Supported 00:10:10.379 Copy (19h): Supported LBA-Change 00:10:10.379 Unknown (1Dh): Supported LBA-Change 00:10:10.379 00:10:10.379 Error Log 00:10:10.379 ========= 00:10:10.379 00:10:10.379 Arbitration 00:10:10.379 =========== 00:10:10.379 Arbitration Burst: no limit 00:10:10.379 00:10:10.379 Power Management 00:10:10.379 ================ 00:10:10.379 Number of Power States: 1 00:10:10.379 Current Power State: Power State #0 00:10:10.379 Power State #0: 00:10:10.379 Max Power: 25.00 W 00:10:10.379 Non-Operational State: Operational 00:10:10.379 Entry Latency: 16 microseconds 00:10:10.379 Exit Latency: 4 microseconds 00:10:10.379 Relative Read Throughput: 0 00:10:10.379 Relative Read Latency: 0 00:10:10.379 Relative Write Throughput: 0 00:10:10.379 Relative Write Latency: 0 00:10:10.379 Idle Power: Not Reported 00:10:10.379 Active Power: Not Reported 00:10:10.379 Non-Operational Permissive Mode: Not Supported 00:10:10.379 00:10:10.379 Health Information 00:10:10.379 ================== 00:10:10.379 Critical Warnings: 00:10:10.379 Available Spare Space: OK 00:10:10.379 Temperature: OK 00:10:10.379 Device Reliability: OK 00:10:10.379 Read Only: No 00:10:10.379 Volatile Memory Backup: OK 00:10:10.379 Current Temperature: 323 Kelvin (50 Celsius) 00:10:10.379 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:10.379 Available Spare: 0% 00:10:10.379 Available Spare Threshold: 0% 00:10:10.379 Life Percentage Used: 0% 00:10:10.379 Data Units Read: 814 00:10:10.379 Data Units Written: 708 00:10:10.379 Host Read Commands: 34894 00:10:10.379 Host Write Commands: 33484 00:10:10.379 Controller Busy Time: 0 minutes 00:10:10.379 Power Cycles: 0 00:10:10.379 Power On Hours: 0 hours 00:10:10.379 Unsafe Shutdowns: 0 00:10:10.379 Unrecoverable Media Errors: 0 00:10:10.379 Lifetime Error Log Entries: 0 00:10:10.379 Warning Temperature Time: 0 minutes 00:10:10.379 Critical Temperature Time: 0 minutes 00:10:10.379 00:10:10.379 Number of Queues 00:10:10.379 ================ 00:10:10.379 Number of I/O Submission Queues: 64 00:10:10.379 Number of I/O Completion Queues: 64 00:10:10.379 00:10:10.379 ZNS Specific Controller Data 00:10:10.379 ============================ 00:10:10.379 Zone Append Size Limit: 0 00:10:10.379 00:10:10.379 00:10:10.379 Active Namespaces 00:10:10.379 ================= 00:10:10.379 Namespace ID:1 00:10:10.379 Error Recovery Timeout: Unlimited 00:10:10.379 Command Set Identifier: NVM (00h) 00:10:10.379 Deallocate: Supported 00:10:10.379 Deallocated/Unwritten Error: Supported 00:10:10.379 Deallocated Read Value: All 0x00 00:10:10.379 Deallocate in Write Zeroes: Not Supported 00:10:10.379 Deallocated Guard Field: 0xFFFF 00:10:10.379 Flush: Supported 00:10:10.379 Reservation: Not Supported 00:10:10.379 Namespace Sharing Capabilities: Multiple Controllers 00:10:10.379 Size (in LBAs): 262144 (1GiB) 00:10:10.379 Capacity (in LBAs): 262144 (1GiB) 00:10:10.379 Utilization (in LBAs): 262144 (1GiB) 00:10:10.379 Thin Provisioning: Not Supported 00:10:10.379 Per-NS Atomic Units: No 00:10:10.379 Maximum Single Source Range Length: 128 00:10:10.379 Maximum Copy Length: 128 00:10:10.379 Maximum Source Range Count: 128 00:10:10.379 NGUID/EUI64 Never Reused: No 00:10:10.379 Namespace Write Protected: No 00:10:10.379 Endurance group ID: 1 00:10:10.379 
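Both temperature units appear in the Health Information blocks because the NVMe SMART/Health log reports composite temperature in kelvin; subtracting 273 gives the Celsius figure shown in parentheses. A one-liner illustrating the conversion for the values above (plain C):

#include <stdio.h>

int main(void)
{
    int current_k = 323, threshold_k = 343;  /* values from the health log above */

    /* SMART/Health log temperatures are in kelvin; subtract 273 for Celsius. */
    printf("Current:   %d K (%d C)\n", current_k, current_k - 273);
    printf("Threshold: %d K (%d C)\n", threshold_k, threshold_k - 273);
    /* Prints 50 C and 70 C, matching the identify output. */
    return 0;
}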
Number of LBA Formats: 8 00:10:10.379 Current LBA Format: LBA Format #04 00:10:10.379 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:10.379 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:10.379 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:10.379 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:10.379 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:10.379 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:10.379 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:10.379 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:10.379 00:10:10.379 Get Feature FDP: 00:10:10.379 ================ 00:10:10.379 Enabled: Yes 00:10:10.379 FDP configuration index: 0 00:10:10.379 00:10:10.379 FDP configurations log page 00:10:10.379 =========================== 00:10:10.379 Number of FDP configurations: 1 00:10:10.379 Version: 0 00:10:10.379 Size: 112 00:10:10.379 FDP Configuration Descriptor: 0 00:10:10.379 Descriptor Size: 96 00:10:10.379 Reclaim Group Identifier format: 2 00:10:10.379 FDP Volatile Write Cache: Not Present 00:10:10.379 FDP Configuration: Valid 00:10:10.379 Vendor Specific Size: 0 00:10:10.379 Number of Reclaim Groups: 2 00:10:10.379 Number of Reclaim Unit Handles: 8 00:10:10.379 Max Placement Identifiers: 128 00:10:10.379 Number of Namespaces Supported: 256 00:10:10.379 Reclaim unit Nominal Size: 6000000 bytes 00:10:10.379 Estimated Reclaim Unit Time Limit: Not Reported 00:10:10.379 RUH Desc #000: RUH Type: Initially Isolated 00:10:10.379 RUH Desc #001: RUH Type: Initially Isolated 00:10:10.380 RUH Desc #002: RUH Type: Initially Isolated 00:10:10.380 RUH Desc #003: RUH Type: Initially Isolated 00:10:10.380 RUH Desc #004: RUH Type: Initially Isolated 00:10:10.380 RUH Desc #005: RUH Type: Initially Isolated 00:10:10.380 RUH Desc #006: RUH Type: Initially Isolated 00:10:10.380 RUH Desc #007: RUH Type: Initially Isolated 00:10:10.380 00:10:10.380 FDP reclaim unit handle usage log page 00:10:10.380 ====================================== 00:10:10.380 Number of Reclaim Unit Handles: 8 00:10:10.380 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:10.380 RUH Usage Desc #001: RUH Attributes: Unused 00:10:10.380 RUH Usage Desc #002: RUH Attributes: Unused 00:10:10.380 RUH Usage Desc #003: RUH Attributes: Unused 00:10:10.380 RUH Usage Desc #004: RUH Attributes: Unused 00:10:10.380 RUH Usage Desc #005: RUH Attributes: Unused 00:10:10.380 RUH Usage Desc #006: RUH Attributes: Unused 00:10:10.380 RUH Usage Desc #007: RUH Attributes: Unused 00:10:10.380 00:10:10.380 FDP statistics log page 00:10:10.380 ======================= 00:10:10.380 Host bytes with metadata written: 413769728 00:10:10.380 Media bytes with metadata written: 413822976 00:10:10.380 Media bytes erased: 0 00:10:10.380 00:10:10.380 FDP events log page 00:10:10.380 =================== 00:10:10.380 Number of FDP events: 0 00:10:10.380 00:10:10.380 ************************************ 00:10:10.380 END TEST nvme_identify 00:10:10.380 ************************************ 00:10:10.380 00:10:10.380 real 0m1.396s 00:10:10.380 user 0m0.533s 00:10:10.380 sys 0m0.672s 00:10:10.380 13:09:07 nvme.nvme_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:10.380 13:09:07 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:10:10.638 13:09:07 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:10:10.638 13:09:07 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:10.638 13:09:07 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 
00:10:10.638 13:09:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:10.638 ************************************ 00:10:10.638 START TEST nvme_perf 00:10:10.638 ************************************ 00:10:10.638 13:09:07 nvme.nvme_perf -- common/autotest_common.sh@1121 -- # nvme_perf 00:10:10.638 13:09:07 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:10:12.058 Initializing NVMe Controllers 00:10:12.058 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:12.058 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:12.058 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:12.058 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:12.058 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:12.058 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:12.058 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:12.058 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:12.058 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:12.058 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:12.058 Initialization complete. Launching workers. 00:10:12.058 ======================================================== 00:10:12.058 Latency(us) 00:10:12.058 Device Information : IOPS MiB/s Average min max 00:10:12.058 PCIE (0000:00:10.0) NSID 1 from core 0: 12953.59 151.80 9886.68 6350.04 33168.14 00:10:12.058 PCIE (0000:00:11.0) NSID 1 from core 0: 12953.59 151.80 9877.80 6101.51 32282.27 00:10:12.058 PCIE (0000:00:13.0) NSID 1 from core 0: 12953.59 151.80 9866.47 5386.55 32371.34 00:10:12.058 PCIE (0000:00:12.0) NSID 1 from core 0: 12953.59 151.80 9854.50 5092.70 31640.20 00:10:12.058 PCIE (0000:00:12.0) NSID 2 from core 0: 12953.59 151.80 9842.40 4800.67 30843.69 00:10:12.058 PCIE (0000:00:12.0) NSID 3 from core 0: 12953.59 151.80 9830.30 4420.71 30074.17 00:10:12.058 ======================================================== 00:10:12.058 Total : 77721.56 910.80 9859.69 4420.71 33168.14 00:10:12.058 00:10:12.058 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:12.058 ================================================================================= 00:10:12.058 1.00000% : 8281.367us 00:10:12.058 10.00000% : 8638.836us 00:10:12.058 25.00000% : 8936.727us 00:10:12.058 50.00000% : 9413.353us 00:10:12.058 75.00000% : 10128.291us 00:10:12.058 90.00000% : 11319.855us 00:10:12.058 95.00000% : 12809.309us 00:10:12.058 98.00000% : 14298.764us 00:10:12.058 99.00000% : 15609.484us 00:10:12.058 99.50000% : 26452.713us 00:10:12.058 99.90000% : 32887.156us 00:10:12.058 99.99000% : 33125.469us 00:10:12.058 99.99900% : 33363.782us 00:10:12.058 99.99990% : 33363.782us 00:10:12.058 99.99999% : 33363.782us 00:10:12.058 00:10:12.058 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:12.058 ================================================================================= 00:10:12.058 1.00000% : 8400.524us 00:10:12.058 10.00000% : 8698.415us 00:10:12.058 25.00000% : 8936.727us 00:10:12.058 50.00000% : 9353.775us 00:10:12.058 75.00000% : 10187.869us 00:10:12.058 90.00000% : 11200.698us 00:10:12.058 95.00000% : 12868.887us 00:10:12.058 98.00000% : 14298.764us 00:10:12.058 99.00000% : 15966.953us 00:10:12.058 99.50000% : 26333.556us 00:10:12.058 99.90000% : 32172.218us 00:10:12.058 99.99000% : 32410.531us 00:10:12.058 99.99900% : 32410.531us 00:10:12.058 99.99990% : 32410.531us 00:10:12.058 99.99999% : 32410.531us 00:10:12.058 
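The MiB/s column in the summary table above is derived from the IOPS column and the 12288-byte I/O size passed to spdk_nvme_perf via -o. A quick check for the first row, in plain C with the values copied from the table:

#include <stdio.h>

int main(void)
{
    double iops    = 12953.59;   /* PCIE (0000:00:10.0) NSID 1 row above */
    double io_size = 12288.0;    /* -o 12288 from the spdk_nvme_perf command line */

    double mib_s = iops * io_size / (1024.0 * 1024.0);
    printf("%.2f IOPS * %.0f B = %.2f MiB/s\n", iops, io_size, mib_s);
    /* ~151.80 MiB/s, matching the MiB/s column. */
    return 0;
}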
00:10:12.058 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:12.058 ================================================================================= 00:10:12.058 1.00000% : 8340.945us 00:10:12.058 10.00000% : 8698.415us 00:10:12.058 25.00000% : 8936.727us 00:10:12.058 50.00000% : 9353.775us 00:10:12.058 75.00000% : 10187.869us 00:10:12.058 90.00000% : 11200.698us 00:10:12.058 95.00000% : 12928.465us 00:10:12.058 98.00000% : 14298.764us 00:10:12.058 99.00000% : 16086.109us 00:10:12.058 99.50000% : 26095.244us 00:10:12.058 99.90000% : 32172.218us 00:10:12.058 99.99000% : 32410.531us 00:10:12.058 99.99900% : 32410.531us 00:10:12.058 99.99990% : 32410.531us 00:10:12.058 99.99999% : 32410.531us 00:10:12.058 00:10:12.058 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:12.058 ================================================================================= 00:10:12.058 1.00000% : 8281.367us 00:10:12.058 10.00000% : 8698.415us 00:10:12.058 25.00000% : 8936.727us 00:10:12.058 50.00000% : 9353.775us 00:10:12.058 75.00000% : 10187.869us 00:10:12.058 90.00000% : 11200.698us 00:10:12.058 95.00000% : 12809.309us 00:10:12.058 98.00000% : 14417.920us 00:10:12.058 99.00000% : 15966.953us 00:10:12.058 99.50000% : 25261.149us 00:10:12.058 99.90000% : 31457.280us 00:10:12.058 99.99000% : 31695.593us 00:10:12.058 99.99900% : 31695.593us 00:10:12.058 99.99990% : 31695.593us 00:10:12.058 99.99999% : 31695.593us 00:10:12.058 00:10:12.058 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:12.058 ================================================================================= 00:10:12.058 1.00000% : 8162.211us 00:10:12.058 10.00000% : 8698.415us 00:10:12.058 25.00000% : 8936.727us 00:10:12.058 50.00000% : 9353.775us 00:10:12.058 75.00000% : 10187.869us 00:10:12.058 90.00000% : 11141.120us 00:10:12.058 95.00000% : 12570.996us 00:10:12.058 98.00000% : 14477.498us 00:10:12.058 99.00000% : 15490.327us 00:10:12.058 99.50000% : 24546.211us 00:10:12.058 99.90000% : 30742.342us 00:10:12.058 99.99000% : 30980.655us 00:10:12.058 99.99900% : 30980.655us 00:10:12.058 99.99990% : 30980.655us 00:10:12.058 99.99999% : 30980.655us 00:10:12.058 00:10:12.058 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:12.058 ================================================================================= 00:10:12.058 1.00000% : 8221.789us 00:10:12.058 10.00000% : 8698.415us 00:10:12.058 25.00000% : 8996.305us 00:10:12.058 50.00000% : 9353.775us 00:10:12.058 75.00000% : 10187.869us 00:10:12.058 90.00000% : 11200.698us 00:10:12.058 95.00000% : 12690.153us 00:10:12.058 98.00000% : 14239.185us 00:10:12.058 99.00000% : 15073.280us 00:10:12.059 99.50000% : 23712.116us 00:10:12.059 99.90000% : 29789.091us 00:10:12.059 99.99000% : 30146.560us 00:10:12.059 99.99900% : 30146.560us 00:10:12.059 99.99990% : 30146.560us 00:10:12.059 99.99999% : 30146.560us 00:10:12.059 00:10:12.059 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:12.059 ============================================================================== 00:10:12.059 Range in us Cumulative IO count 00:10:12.059 6345.076 - 6374.865: 0.0154% ( 2) 00:10:12.059 6374.865 - 6404.655: 0.0308% ( 2) 00:10:12.059 6404.655 - 6434.444: 0.0539% ( 3) 00:10:12.059 6464.233 - 6494.022: 0.0770% ( 3) 00:10:12.059 6494.022 - 6523.811: 0.0847% ( 1) 00:10:12.059 6523.811 - 6553.600: 0.1001% ( 2) 00:10:12.059 6553.600 - 6583.389: 0.1155% ( 2) 00:10:12.059 6583.389 - 6613.178: 0.1308% ( 2) 00:10:12.059 6613.178 
- 6642.967: 0.1385% ( 1) 00:10:12.059 6642.967 - 6672.756: 0.1462% ( 1) 00:10:12.059 6672.756 - 6702.545: 0.1693% ( 3) 00:10:12.059 6702.545 - 6732.335: 0.1770% ( 1) 00:10:12.059 6732.335 - 6762.124: 0.2001% ( 3) 00:10:12.059 6762.124 - 6791.913: 0.2078% ( 1) 00:10:12.059 6791.913 - 6821.702: 0.2232% ( 2) 00:10:12.059 6851.491 - 6881.280: 0.2540% ( 4) 00:10:12.059 6911.069 - 6940.858: 0.2694% ( 2) 00:10:12.059 6940.858 - 6970.647: 0.2848% ( 2) 00:10:12.059 7000.436 - 7030.225: 0.3079% ( 3) 00:10:12.059 7030.225 - 7060.015: 0.3233% ( 2) 00:10:12.059 7060.015 - 7089.804: 0.3310% ( 1) 00:10:12.059 7119.593 - 7149.382: 0.3541% ( 3) 00:10:12.059 7149.382 - 7179.171: 0.3618% ( 1) 00:10:12.059 7179.171 - 7208.960: 0.3695% ( 1) 00:10:12.059 7208.960 - 7238.749: 0.3849% ( 2) 00:10:12.059 7238.749 - 7268.538: 0.3925% ( 1) 00:10:12.059 7268.538 - 7298.327: 0.4079% ( 2) 00:10:12.059 7298.327 - 7328.116: 0.4156% ( 1) 00:10:12.059 7328.116 - 7357.905: 0.4310% ( 2) 00:10:12.059 7357.905 - 7387.695: 0.4387% ( 1) 00:10:12.059 7387.695 - 7417.484: 0.4464% ( 1) 00:10:12.059 7417.484 - 7447.273: 0.4695% ( 3) 00:10:12.059 7447.273 - 7477.062: 0.4772% ( 1) 00:10:12.059 7477.062 - 7506.851: 0.4926% ( 2) 00:10:12.059 8043.055 - 8102.633: 0.5157% ( 3) 00:10:12.059 8102.633 - 8162.211: 0.5927% ( 10) 00:10:12.059 8162.211 - 8221.789: 0.8236% ( 30) 00:10:12.059 8221.789 - 8281.367: 1.1084% ( 37) 00:10:12.059 8281.367 - 8340.945: 1.6549% ( 71) 00:10:12.059 8340.945 - 8400.524: 2.4015% ( 97) 00:10:12.059 8400.524 - 8460.102: 3.6792% ( 166) 00:10:12.059 8460.102 - 8519.680: 5.5727% ( 246) 00:10:12.059 8519.680 - 8579.258: 7.8818% ( 300) 00:10:12.059 8579.258 - 8638.836: 10.6912% ( 365) 00:10:12.059 8638.836 - 8698.415: 13.4929% ( 364) 00:10:12.059 8698.415 - 8757.993: 16.6025% ( 404) 00:10:12.059 8757.993 - 8817.571: 19.8276% ( 419) 00:10:12.059 8817.571 - 8877.149: 23.0450% ( 418) 00:10:12.059 8877.149 - 8936.727: 26.3085% ( 424) 00:10:12.059 8936.727 - 8996.305: 29.5259% ( 418) 00:10:12.059 8996.305 - 9055.884: 32.6893% ( 411) 00:10:12.059 9055.884 - 9115.462: 35.9760% ( 427) 00:10:12.059 9115.462 - 9175.040: 39.2395% ( 424) 00:10:12.059 9175.040 - 9234.618: 42.4954% ( 423) 00:10:12.059 9234.618 - 9294.196: 45.6820% ( 414) 00:10:12.059 9294.196 - 9353.775: 48.8685% ( 414) 00:10:12.059 9353.775 - 9413.353: 52.1090% ( 421) 00:10:12.059 9413.353 - 9472.931: 55.2417% ( 407) 00:10:12.059 9472.931 - 9532.509: 58.3513% ( 404) 00:10:12.059 9532.509 - 9592.087: 61.2916% ( 382) 00:10:12.059 9592.087 - 9651.665: 63.7469% ( 319) 00:10:12.059 9651.665 - 9711.244: 65.8405% ( 272) 00:10:12.059 9711.244 - 9770.822: 67.7263% ( 245) 00:10:12.059 9770.822 - 9830.400: 69.2503% ( 198) 00:10:12.059 9830.400 - 9889.978: 70.5896% ( 174) 00:10:12.059 9889.978 - 9949.556: 71.8057% ( 158) 00:10:12.059 9949.556 - 10009.135: 73.0373% ( 160) 00:10:12.059 10009.135 - 10068.713: 74.0071% ( 126) 00:10:12.059 10068.713 - 10128.291: 75.0385% ( 134) 00:10:12.059 10128.291 - 10187.869: 76.0237% ( 128) 00:10:12.059 10187.869 - 10247.447: 76.9781% ( 124) 00:10:12.059 10247.447 - 10307.025: 77.9711% ( 129) 00:10:12.059 10307.025 - 10366.604: 78.9101% ( 122) 00:10:12.059 10366.604 - 10426.182: 79.7645% ( 111) 00:10:12.059 10426.182 - 10485.760: 80.6496% ( 115) 00:10:12.059 10485.760 - 10545.338: 81.5810% ( 121) 00:10:12.059 10545.338 - 10604.916: 82.4584% ( 114) 00:10:12.059 10604.916 - 10664.495: 83.3821% ( 120) 00:10:12.059 10664.495 - 10724.073: 84.2903% ( 118) 00:10:12.059 10724.073 - 10783.651: 85.1293% ( 109) 00:10:12.059 10783.651 - 10843.229: 
86.0299% ( 117) 00:10:12.059 10843.229 - 10902.807: 86.7688% ( 96) 00:10:12.059 10902.807 - 10962.385: 87.4846% ( 93) 00:10:12.059 10962.385 - 11021.964: 88.1389% ( 85) 00:10:12.059 11021.964 - 11081.542: 88.7392% ( 78) 00:10:12.059 11081.542 - 11141.120: 89.1703% ( 56) 00:10:12.059 11141.120 - 11200.698: 89.6013% ( 56) 00:10:12.059 11200.698 - 11260.276: 89.9092% ( 40) 00:10:12.059 11260.276 - 11319.855: 90.2094% ( 39) 00:10:12.059 11319.855 - 11379.433: 90.4942% ( 37) 00:10:12.059 11379.433 - 11439.011: 90.8097% ( 41) 00:10:12.059 11439.011 - 11498.589: 91.0637% ( 33) 00:10:12.059 11498.589 - 11558.167: 91.3562% ( 38) 00:10:12.059 11558.167 - 11617.745: 91.6102% ( 33) 00:10:12.059 11617.745 - 11677.324: 91.8334% ( 29) 00:10:12.059 11677.324 - 11736.902: 92.0643% ( 30) 00:10:12.059 11736.902 - 11796.480: 92.2491% ( 24) 00:10:12.059 11796.480 - 11856.058: 92.4261% ( 23) 00:10:12.059 11856.058 - 11915.636: 92.5954% ( 22) 00:10:12.059 11915.636 - 11975.215: 92.7956% ( 26) 00:10:12.059 11975.215 - 12034.793: 93.0419% ( 32) 00:10:12.059 12034.793 - 12094.371: 93.1881% ( 19) 00:10:12.059 12094.371 - 12153.949: 93.4190% ( 30) 00:10:12.059 12153.949 - 12213.527: 93.6192% ( 26) 00:10:12.059 12213.527 - 12273.105: 93.8116% ( 25) 00:10:12.059 12273.105 - 12332.684: 94.0040% ( 25) 00:10:12.059 12332.684 - 12392.262: 94.1579% ( 20) 00:10:12.059 12392.262 - 12451.840: 94.3042% ( 19) 00:10:12.059 12451.840 - 12511.418: 94.4581% ( 20) 00:10:12.059 12511.418 - 12570.996: 94.5736% ( 15) 00:10:12.059 12570.996 - 12630.575: 94.7044% ( 17) 00:10:12.059 12630.575 - 12690.153: 94.7968% ( 12) 00:10:12.059 12690.153 - 12749.731: 94.9353% ( 18) 00:10:12.059 12749.731 - 12809.309: 95.0277% ( 12) 00:10:12.059 12809.309 - 12868.887: 95.1432% ( 15) 00:10:12.059 12868.887 - 12928.465: 95.2740% ( 17) 00:10:12.059 12928.465 - 12988.044: 95.3818% ( 14) 00:10:12.059 12988.044 - 13047.622: 95.5280% ( 19) 00:10:12.059 13047.622 - 13107.200: 95.6127% ( 11) 00:10:12.059 13107.200 - 13166.778: 95.7358% ( 16) 00:10:12.059 13166.778 - 13226.356: 95.8590% ( 16) 00:10:12.059 13226.356 - 13285.935: 95.9975% ( 18) 00:10:12.059 13285.935 - 13345.513: 96.1900% ( 25) 00:10:12.059 13345.513 - 13405.091: 96.3593% ( 22) 00:10:12.059 13405.091 - 13464.669: 96.5132% ( 20) 00:10:12.059 13464.669 - 13524.247: 96.6672% ( 20) 00:10:12.059 13524.247 - 13583.825: 96.7903% ( 16) 00:10:12.059 13583.825 - 13643.404: 96.9058% ( 15) 00:10:12.059 13643.404 - 13702.982: 97.0289% ( 16) 00:10:12.059 13702.982 - 13762.560: 97.1444% ( 15) 00:10:12.059 13762.560 - 13822.138: 97.2522% ( 14) 00:10:12.059 13822.138 - 13881.716: 97.3676% ( 15) 00:10:12.059 13881.716 - 13941.295: 97.4831% ( 15) 00:10:12.059 13941.295 - 14000.873: 97.5677% ( 11) 00:10:12.059 14000.873 - 14060.451: 97.6678% ( 13) 00:10:12.059 14060.451 - 14120.029: 97.7756% ( 14) 00:10:12.059 14120.029 - 14179.607: 97.8602% ( 11) 00:10:12.059 14179.607 - 14239.185: 97.9680% ( 14) 00:10:12.059 14239.185 - 14298.764: 98.0757% ( 14) 00:10:12.059 14298.764 - 14358.342: 98.1835% ( 14) 00:10:12.059 14358.342 - 14417.920: 98.2913% ( 14) 00:10:12.059 14417.920 - 14477.498: 98.3836% ( 12) 00:10:12.059 14477.498 - 14537.076: 98.4760% ( 12) 00:10:12.059 14537.076 - 14596.655: 98.5299% ( 7) 00:10:12.059 14596.655 - 14656.233: 98.5683% ( 5) 00:10:12.059 14656.233 - 14715.811: 98.6299% ( 8) 00:10:12.059 14715.811 - 14775.389: 98.7146% ( 11) 00:10:12.059 14775.389 - 14834.967: 98.7454% ( 4) 00:10:12.059 14834.967 - 14894.545: 98.7839% ( 5) 00:10:12.059 14894.545 - 14954.124: 98.8147% ( 4) 00:10:12.059 14954.124 - 
15013.702: 98.8300% ( 2) 00:10:12.059 15013.702 - 15073.280: 98.8531% ( 3) 00:10:12.059 15073.280 - 15132.858: 98.8762% ( 3) 00:10:12.059 15132.858 - 15192.436: 98.8839% ( 1) 00:10:12.059 15192.436 - 15252.015: 98.9070% ( 3) 00:10:12.059 15252.015 - 15371.171: 98.9532% ( 6) 00:10:12.059 15371.171 - 15490.327: 98.9917% ( 5) 00:10:12.059 15490.327 - 15609.484: 99.0148% ( 3) 00:10:12.059 24784.524 - 24903.680: 99.0379% ( 3) 00:10:12.059 24903.680 - 25022.836: 99.0841% ( 6) 00:10:12.059 25022.836 - 25141.993: 99.1225% ( 5) 00:10:12.059 25141.993 - 25261.149: 99.1610% ( 5) 00:10:12.059 25261.149 - 25380.305: 99.1918% ( 4) 00:10:12.059 25380.305 - 25499.462: 99.2303% ( 5) 00:10:12.059 25499.462 - 25618.618: 99.2688% ( 5) 00:10:12.059 25618.618 - 25737.775: 99.2996% ( 4) 00:10:12.059 25737.775 - 25856.931: 99.3381% ( 5) 00:10:12.059 25856.931 - 25976.087: 99.3688% ( 4) 00:10:12.059 25976.087 - 26095.244: 99.4227% ( 7) 00:10:12.059 26095.244 - 26214.400: 99.4535% ( 4) 00:10:12.059 26214.400 - 26333.556: 99.4920% ( 5) 00:10:12.059 26333.556 - 26452.713: 99.5074% ( 2) 00:10:12.059 31457.280 - 31695.593: 99.5228% ( 2) 00:10:12.059 31695.593 - 31933.905: 99.6075% ( 11) 00:10:12.059 31933.905 - 32172.218: 99.6690% ( 8) 00:10:12.059 32172.218 - 32410.531: 99.7614% ( 12) 00:10:12.059 32410.531 - 32648.844: 99.8384% ( 10) 00:10:12.059 32648.844 - 32887.156: 99.9153% ( 10) 00:10:12.059 32887.156 - 33125.469: 99.9923% ( 10) 00:10:12.059 33125.469 - 33363.782: 100.0000% ( 1) 00:10:12.059 00:10:12.059 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:12.059 ============================================================================== 00:10:12.059 Range in us Cumulative IO count 00:10:12.059 6076.975 - 6106.764: 0.0077% ( 1) 00:10:12.059 6106.764 - 6136.553: 0.0231% ( 2) 00:10:12.059 6136.553 - 6166.342: 0.0385% ( 2) 00:10:12.059 6166.342 - 6196.131: 0.0539% ( 2) 00:10:12.059 6196.131 - 6225.920: 0.0693% ( 2) 00:10:12.059 6225.920 - 6255.709: 0.0847% ( 2) 00:10:12.059 6255.709 - 6285.498: 0.1001% ( 2) 00:10:12.060 6285.498 - 6315.287: 0.1078% ( 1) 00:10:12.060 6315.287 - 6345.076: 0.1155% ( 1) 00:10:12.060 6345.076 - 6374.865: 0.1308% ( 2) 00:10:12.060 6374.865 - 6404.655: 0.1462% ( 2) 00:10:12.060 6404.655 - 6434.444: 0.1616% ( 2) 00:10:12.060 6434.444 - 6464.233: 0.1770% ( 2) 00:10:12.060 6464.233 - 6494.022: 0.1924% ( 2) 00:10:12.060 6494.022 - 6523.811: 0.2078% ( 2) 00:10:12.060 6523.811 - 6553.600: 0.2232% ( 2) 00:10:12.060 6553.600 - 6583.389: 0.2463% ( 3) 00:10:12.060 6583.389 - 6613.178: 0.2617% ( 2) 00:10:12.060 6613.178 - 6642.967: 0.2771% ( 2) 00:10:12.060 6642.967 - 6672.756: 0.2925% ( 2) 00:10:12.060 6672.756 - 6702.545: 0.3079% ( 2) 00:10:12.060 6702.545 - 6732.335: 0.3233% ( 2) 00:10:12.060 6732.335 - 6762.124: 0.3387% ( 2) 00:10:12.060 6762.124 - 6791.913: 0.3464% ( 1) 00:10:12.060 6791.913 - 6821.702: 0.3618% ( 2) 00:10:12.060 6821.702 - 6851.491: 0.3772% ( 2) 00:10:12.060 6851.491 - 6881.280: 0.3925% ( 2) 00:10:12.060 6881.280 - 6911.069: 0.4079% ( 2) 00:10:12.060 6911.069 - 6940.858: 0.4233% ( 2) 00:10:12.060 6940.858 - 6970.647: 0.4464% ( 3) 00:10:12.060 6970.647 - 7000.436: 0.4618% ( 2) 00:10:12.060 7000.436 - 7030.225: 0.4772% ( 2) 00:10:12.060 7030.225 - 7060.015: 0.4926% ( 2) 00:10:12.060 8102.633 - 8162.211: 0.5003% ( 1) 00:10:12.060 8162.211 - 8221.789: 0.5388% ( 5) 00:10:12.060 8221.789 - 8281.367: 0.6466% ( 14) 00:10:12.060 8281.367 - 8340.945: 0.9083% ( 34) 00:10:12.060 8340.945 - 8400.524: 1.3701% ( 60) 00:10:12.060 8400.524 - 8460.102: 2.1783% ( 105) 
00:10:12.060 8460.102 - 8519.680: 3.2405% ( 138) 00:10:12.060 8519.680 - 8579.258: 4.9723% ( 225) 00:10:12.060 8579.258 - 8638.836: 7.4123% ( 317) 00:10:12.060 8638.836 - 8698.415: 10.3217% ( 378) 00:10:12.060 8698.415 - 8757.993: 13.5699% ( 422) 00:10:12.060 8757.993 - 8817.571: 17.2799% ( 482) 00:10:12.060 8817.571 - 8877.149: 21.1284% ( 500) 00:10:12.060 8877.149 - 8936.727: 25.0385% ( 508) 00:10:12.060 8936.727 - 8996.305: 28.9563% ( 509) 00:10:12.060 8996.305 - 9055.884: 32.8972% ( 512) 00:10:12.060 9055.884 - 9115.462: 36.8534% ( 514) 00:10:12.060 9115.462 - 9175.040: 40.6943% ( 499) 00:10:12.060 9175.040 - 9234.618: 44.5582% ( 502) 00:10:12.060 9234.618 - 9294.196: 48.2374% ( 478) 00:10:12.060 9294.196 - 9353.775: 51.7164% ( 452) 00:10:12.060 9353.775 - 9413.353: 55.0108% ( 428) 00:10:12.060 9413.353 - 9472.931: 58.1435% ( 407) 00:10:12.060 9472.931 - 9532.509: 60.9221% ( 361) 00:10:12.060 9532.509 - 9592.087: 63.3082% ( 310) 00:10:12.060 9592.087 - 9651.665: 65.1401% ( 238) 00:10:12.060 9651.665 - 9711.244: 66.7719% ( 212) 00:10:12.060 9711.244 - 9770.822: 68.1419% ( 178) 00:10:12.060 9770.822 - 9830.400: 69.3812% ( 161) 00:10:12.060 9830.400 - 9889.978: 70.6127% ( 160) 00:10:12.060 9889.978 - 9949.556: 71.7672% ( 150) 00:10:12.060 9949.556 - 10009.135: 72.8756% ( 144) 00:10:12.060 10009.135 - 10068.713: 73.9532% ( 140) 00:10:12.060 10068.713 - 10128.291: 74.9846% ( 134) 00:10:12.060 10128.291 - 10187.869: 76.0776% ( 142) 00:10:12.060 10187.869 - 10247.447: 77.1244% ( 136) 00:10:12.060 10247.447 - 10307.025: 78.1558% ( 134) 00:10:12.060 10307.025 - 10366.604: 79.3026% ( 149) 00:10:12.060 10366.604 - 10426.182: 80.3956% ( 142) 00:10:12.060 10426.182 - 10485.760: 81.5117% ( 145) 00:10:12.060 10485.760 - 10545.338: 82.6509% ( 148) 00:10:12.060 10545.338 - 10604.916: 83.7054% ( 137) 00:10:12.060 10604.916 - 10664.495: 84.7445% ( 135) 00:10:12.060 10664.495 - 10724.073: 85.7220% ( 127) 00:10:12.060 10724.073 - 10783.651: 86.5687% ( 110) 00:10:12.060 10783.651 - 10843.229: 87.3384% ( 100) 00:10:12.060 10843.229 - 10902.807: 87.9926% ( 85) 00:10:12.060 10902.807 - 10962.385: 88.5314% ( 70) 00:10:12.060 10962.385 - 11021.964: 88.9855% ( 59) 00:10:12.060 11021.964 - 11081.542: 89.3935% ( 53) 00:10:12.060 11081.542 - 11141.120: 89.6937% ( 39) 00:10:12.060 11141.120 - 11200.698: 90.0092% ( 41) 00:10:12.060 11200.698 - 11260.276: 90.3171% ( 40) 00:10:12.060 11260.276 - 11319.855: 90.5634% ( 32) 00:10:12.060 11319.855 - 11379.433: 90.7635% ( 26) 00:10:12.060 11379.433 - 11439.011: 90.9714% ( 27) 00:10:12.060 11439.011 - 11498.589: 91.1715% ( 26) 00:10:12.060 11498.589 - 11558.167: 91.3485% ( 23) 00:10:12.060 11558.167 - 11617.745: 91.5102% ( 21) 00:10:12.060 11617.745 - 11677.324: 91.7026% ( 25) 00:10:12.060 11677.324 - 11736.902: 91.9027% ( 26) 00:10:12.060 11736.902 - 11796.480: 92.0797% ( 23) 00:10:12.060 11796.480 - 11856.058: 92.2722% ( 25) 00:10:12.060 11856.058 - 11915.636: 92.4107% ( 18) 00:10:12.060 11915.636 - 11975.215: 92.5724% ( 21) 00:10:12.060 11975.215 - 12034.793: 92.7417% ( 22) 00:10:12.060 12034.793 - 12094.371: 92.9264% ( 24) 00:10:12.060 12094.371 - 12153.949: 93.1419% ( 28) 00:10:12.060 12153.949 - 12213.527: 93.3267% ( 24) 00:10:12.060 12213.527 - 12273.105: 93.5037% ( 23) 00:10:12.060 12273.105 - 12332.684: 93.6576% ( 20) 00:10:12.060 12332.684 - 12392.262: 93.8270% ( 22) 00:10:12.060 12392.262 - 12451.840: 94.0040% ( 23) 00:10:12.060 12451.840 - 12511.418: 94.1502% ( 19) 00:10:12.060 12511.418 - 12570.996: 94.3273% ( 23) 00:10:12.060 12570.996 - 12630.575: 94.4735% ( 
19) 00:10:12.060 12630.575 - 12690.153: 94.6275% ( 20) 00:10:12.060 12690.153 - 12749.731: 94.7891% ( 21) 00:10:12.060 12749.731 - 12809.309: 94.9507% ( 21) 00:10:12.060 12809.309 - 12868.887: 95.1201% ( 22) 00:10:12.060 12868.887 - 12928.465: 95.2817% ( 21) 00:10:12.060 12928.465 - 12988.044: 95.4049% ( 16) 00:10:12.060 12988.044 - 13047.622: 95.5357% ( 17) 00:10:12.060 13047.622 - 13107.200: 95.6974% ( 21) 00:10:12.060 13107.200 - 13166.778: 95.8513% ( 20) 00:10:12.060 13166.778 - 13226.356: 96.0129% ( 21) 00:10:12.060 13226.356 - 13285.935: 96.1746% ( 21) 00:10:12.060 13285.935 - 13345.513: 96.2900% ( 15) 00:10:12.060 13345.513 - 13405.091: 96.3978% ( 14) 00:10:12.060 13405.091 - 13464.669: 96.5055% ( 14) 00:10:12.060 13464.669 - 13524.247: 96.6364% ( 17) 00:10:12.060 13524.247 - 13583.825: 96.7672% ( 17) 00:10:12.060 13583.825 - 13643.404: 96.8827% ( 15) 00:10:12.060 13643.404 - 13702.982: 96.9905% ( 14) 00:10:12.060 13702.982 - 13762.560: 97.0905% ( 13) 00:10:12.060 13762.560 - 13822.138: 97.1906% ( 13) 00:10:12.060 13822.138 - 13881.716: 97.2983% ( 14) 00:10:12.060 13881.716 - 13941.295: 97.4138% ( 15) 00:10:12.060 13941.295 - 14000.873: 97.5216% ( 14) 00:10:12.060 14000.873 - 14060.451: 97.6216% ( 13) 00:10:12.060 14060.451 - 14120.029: 97.7217% ( 13) 00:10:12.060 14120.029 - 14179.607: 97.8294% ( 14) 00:10:12.060 14179.607 - 14239.185: 97.9218% ( 12) 00:10:12.060 14239.185 - 14298.764: 98.0065% ( 11) 00:10:12.060 14298.764 - 14358.342: 98.0757% ( 9) 00:10:12.060 14358.342 - 14417.920: 98.1681% ( 12) 00:10:12.060 14417.920 - 14477.498: 98.2451% ( 10) 00:10:12.060 14477.498 - 14537.076: 98.3220% ( 10) 00:10:12.060 14537.076 - 14596.655: 98.3836% ( 8) 00:10:12.060 14596.655 - 14656.233: 98.4298% ( 6) 00:10:12.060 14656.233 - 14715.811: 98.4760% ( 6) 00:10:12.060 14715.811 - 14775.389: 98.5299% ( 7) 00:10:12.060 14775.389 - 14834.967: 98.5760% ( 6) 00:10:12.060 14834.967 - 14894.545: 98.5991% ( 3) 00:10:12.060 14894.545 - 14954.124: 98.6222% ( 3) 00:10:12.060 14954.124 - 15013.702: 98.6530% ( 4) 00:10:12.060 15013.702 - 15073.280: 98.6761% ( 3) 00:10:12.060 15073.280 - 15132.858: 98.7069% ( 4) 00:10:12.060 15132.858 - 15192.436: 98.7300% ( 3) 00:10:12.060 15192.436 - 15252.015: 98.7531% ( 3) 00:10:12.060 15252.015 - 15371.171: 98.8070% ( 7) 00:10:12.060 15371.171 - 15490.327: 98.8608% ( 7) 00:10:12.060 15490.327 - 15609.484: 98.9070% ( 6) 00:10:12.060 15609.484 - 15728.640: 98.9455% ( 5) 00:10:12.060 15728.640 - 15847.796: 98.9994% ( 7) 00:10:12.060 15847.796 - 15966.953: 99.0148% ( 2) 00:10:12.060 24784.524 - 24903.680: 99.0225% ( 1) 00:10:12.060 24903.680 - 25022.836: 99.0610% ( 5) 00:10:12.060 25022.836 - 25141.993: 99.1071% ( 6) 00:10:12.060 25141.993 - 25261.149: 99.1456% ( 5) 00:10:12.060 25261.149 - 25380.305: 99.1918% ( 6) 00:10:12.060 25380.305 - 25499.462: 99.2303% ( 5) 00:10:12.060 25499.462 - 25618.618: 99.2765% ( 6) 00:10:12.060 25618.618 - 25737.775: 99.3150% ( 5) 00:10:12.060 25737.775 - 25856.931: 99.3611% ( 6) 00:10:12.060 25856.931 - 25976.087: 99.4073% ( 6) 00:10:12.060 25976.087 - 26095.244: 99.4535% ( 6) 00:10:12.060 26095.244 - 26214.400: 99.4920% ( 5) 00:10:12.060 26214.400 - 26333.556: 99.5074% ( 2) 00:10:12.060 30742.342 - 30980.655: 99.5228% ( 2) 00:10:12.060 30980.655 - 31218.967: 99.6151% ( 12) 00:10:12.060 31218.967 - 31457.280: 99.6844% ( 9) 00:10:12.060 31457.280 - 31695.593: 99.7768% ( 12) 00:10:12.060 31695.593 - 31933.905: 99.8692% ( 12) 00:10:12.060 31933.905 - 32172.218: 99.9538% ( 11) 00:10:12.060 32172.218 - 32410.531: 100.0000% ( 6) 00:10:12.060 
00:10:12.060 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:12.060 ============================================================================== 00:10:12.060 Range in us Cumulative IO count 00:10:12.060 5362.036 - 5391.825: 0.0154% ( 2) 00:10:12.060 5391.825 - 5421.615: 0.0308% ( 2) 00:10:12.060 5421.615 - 5451.404: 0.0385% ( 1) 00:10:12.060 5451.404 - 5481.193: 0.0539% ( 2) 00:10:12.060 5481.193 - 5510.982: 0.0693% ( 2) 00:10:12.060 5510.982 - 5540.771: 0.0847% ( 2) 00:10:12.060 5540.771 - 5570.560: 0.1001% ( 2) 00:10:12.060 5570.560 - 5600.349: 0.1232% ( 3) 00:10:12.060 5600.349 - 5630.138: 0.1385% ( 2) 00:10:12.060 5630.138 - 5659.927: 0.1539% ( 2) 00:10:12.060 5659.927 - 5689.716: 0.1616% ( 1) 00:10:12.060 5689.716 - 5719.505: 0.1770% ( 2) 00:10:12.060 5719.505 - 5749.295: 0.1924% ( 2) 00:10:12.060 5749.295 - 5779.084: 0.2078% ( 2) 00:10:12.060 5779.084 - 5808.873: 0.2232% ( 2) 00:10:12.060 5808.873 - 5838.662: 0.2386% ( 2) 00:10:12.060 5838.662 - 5868.451: 0.2540% ( 2) 00:10:12.060 5868.451 - 5898.240: 0.2694% ( 2) 00:10:12.061 5898.240 - 5928.029: 0.2925% ( 3) 00:10:12.061 5928.029 - 5957.818: 0.3079% ( 2) 00:10:12.061 5957.818 - 5987.607: 0.3233% ( 2) 00:10:12.061 5987.607 - 6017.396: 0.3387% ( 2) 00:10:12.061 6017.396 - 6047.185: 0.3541% ( 2) 00:10:12.061 6047.185 - 6076.975: 0.3695% ( 2) 00:10:12.061 6076.975 - 6106.764: 0.3849% ( 2) 00:10:12.061 6106.764 - 6136.553: 0.4002% ( 2) 00:10:12.061 6136.553 - 6166.342: 0.4156% ( 2) 00:10:12.061 6166.342 - 6196.131: 0.4233% ( 1) 00:10:12.061 6196.131 - 6225.920: 0.4387% ( 2) 00:10:12.061 6225.920 - 6255.709: 0.4541% ( 2) 00:10:12.061 6255.709 - 6285.498: 0.4695% ( 2) 00:10:12.061 6285.498 - 6315.287: 0.4849% ( 2) 00:10:12.061 6315.287 - 6345.076: 0.4926% ( 1) 00:10:12.061 7864.320 - 7923.898: 0.5311% ( 5) 00:10:12.061 7923.898 - 7983.476: 0.5619% ( 4) 00:10:12.061 7983.476 - 8043.055: 0.5850% ( 3) 00:10:12.061 8043.055 - 8102.633: 0.6389% ( 7) 00:10:12.061 8102.633 - 8162.211: 0.6696% ( 4) 00:10:12.061 8162.211 - 8221.789: 0.7081% ( 5) 00:10:12.061 8221.789 - 8281.367: 0.8544% ( 19) 00:10:12.061 8281.367 - 8340.945: 1.1161% ( 34) 00:10:12.061 8340.945 - 8400.524: 1.6010% ( 63) 00:10:12.061 8400.524 - 8460.102: 2.4631% ( 112) 00:10:12.061 8460.102 - 8519.680: 3.7792% ( 171) 00:10:12.061 8519.680 - 8579.258: 5.5342% ( 228) 00:10:12.061 8579.258 - 8638.836: 7.8279% ( 298) 00:10:12.061 8638.836 - 8698.415: 10.8759% ( 396) 00:10:12.061 8698.415 - 8757.993: 14.2626% ( 440) 00:10:12.061 8757.993 - 8817.571: 17.9187% ( 475) 00:10:12.061 8817.571 - 8877.149: 21.7288% ( 495) 00:10:12.061 8877.149 - 8936.727: 25.4772% ( 487) 00:10:12.061 8936.727 - 8996.305: 29.2950% ( 496) 00:10:12.061 8996.305 - 9055.884: 33.1512% ( 501) 00:10:12.061 9055.884 - 9115.462: 36.9535% ( 494) 00:10:12.061 9115.462 - 9175.040: 40.7866% ( 498) 00:10:12.061 9175.040 - 9234.618: 44.5582% ( 490) 00:10:12.061 9234.618 - 9294.196: 48.1989% ( 473) 00:10:12.061 9294.196 - 9353.775: 51.6780% ( 452) 00:10:12.061 9353.775 - 9413.353: 54.9184% ( 421) 00:10:12.061 9413.353 - 9472.931: 57.8664% ( 383) 00:10:12.061 9472.931 - 9532.509: 60.5834% ( 353) 00:10:12.061 9532.509 - 9592.087: 62.9772% ( 311) 00:10:12.061 9592.087 - 9651.665: 64.9400% ( 255) 00:10:12.061 9651.665 - 9711.244: 66.6564% ( 223) 00:10:12.061 9711.244 - 9770.822: 68.1496% ( 194) 00:10:12.061 9770.822 - 9830.400: 69.4889% ( 174) 00:10:12.061 9830.400 - 9889.978: 70.8128% ( 172) 00:10:12.061 9889.978 - 9949.556: 71.8596% ( 136) 00:10:12.061 9949.556 - 10009.135: 72.9372% ( 140) 00:10:12.061 
10009.135 - 10068.713: 73.9147% ( 127) 00:10:12.061 10068.713 - 10128.291: 74.9384% ( 133) 00:10:12.061 10128.291 - 10187.869: 76.0006% ( 138) 00:10:12.061 10187.869 - 10247.447: 77.0936% ( 142) 00:10:12.061 10247.447 - 10307.025: 78.2482% ( 150) 00:10:12.061 10307.025 - 10366.604: 79.3873% ( 148) 00:10:12.061 10366.604 - 10426.182: 80.4572% ( 139) 00:10:12.061 10426.182 - 10485.760: 81.5733% ( 145) 00:10:12.061 10485.760 - 10545.338: 82.6586% ( 141) 00:10:12.061 10545.338 - 10604.916: 83.7669% ( 144) 00:10:12.061 10604.916 - 10664.495: 84.8599% ( 142) 00:10:12.061 10664.495 - 10724.073: 85.8605% ( 130) 00:10:12.061 10724.073 - 10783.651: 86.7688% ( 118) 00:10:12.061 10783.651 - 10843.229: 87.6155% ( 110) 00:10:12.061 10843.229 - 10902.807: 88.2466% ( 82) 00:10:12.061 10902.807 - 10962.385: 88.7315% ( 63) 00:10:12.061 10962.385 - 11021.964: 89.1549% ( 55) 00:10:12.061 11021.964 - 11081.542: 89.5705% ( 54) 00:10:12.061 11081.542 - 11141.120: 89.9784% ( 53) 00:10:12.061 11141.120 - 11200.698: 90.3094% ( 43) 00:10:12.061 11200.698 - 11260.276: 90.6173% ( 40) 00:10:12.061 11260.276 - 11319.855: 90.9021% ( 37) 00:10:12.061 11319.855 - 11379.433: 91.1638% ( 34) 00:10:12.061 11379.433 - 11439.011: 91.3870% ( 29) 00:10:12.061 11439.011 - 11498.589: 91.6102% ( 29) 00:10:12.061 11498.589 - 11558.167: 91.8103% ( 26) 00:10:12.061 11558.167 - 11617.745: 92.0182% ( 27) 00:10:12.061 11617.745 - 11677.324: 92.1875% ( 22) 00:10:12.061 11677.324 - 11736.902: 92.3722% ( 24) 00:10:12.061 11736.902 - 11796.480: 92.5954% ( 29) 00:10:12.061 11796.480 - 11856.058: 92.7494% ( 20) 00:10:12.061 11856.058 - 11915.636: 92.9110% ( 21) 00:10:12.061 11915.636 - 11975.215: 93.0265% ( 15) 00:10:12.061 11975.215 - 12034.793: 93.1496% ( 16) 00:10:12.061 12034.793 - 12094.371: 93.2651% ( 15) 00:10:12.061 12094.371 - 12153.949: 93.3728% ( 14) 00:10:12.061 12153.949 - 12213.527: 93.5191% ( 19) 00:10:12.061 12213.527 - 12273.105: 93.6422% ( 16) 00:10:12.061 12273.105 - 12332.684: 93.7423% ( 13) 00:10:12.061 12332.684 - 12392.262: 93.8270% ( 11) 00:10:12.061 12392.262 - 12451.840: 93.9732% ( 19) 00:10:12.061 12451.840 - 12511.418: 94.0733% ( 13) 00:10:12.061 12511.418 - 12570.996: 94.1964% ( 16) 00:10:12.061 12570.996 - 12630.575: 94.3350% ( 18) 00:10:12.061 12630.575 - 12690.153: 94.4658% ( 17) 00:10:12.061 12690.153 - 12749.731: 94.6198% ( 20) 00:10:12.061 12749.731 - 12809.309: 94.7660% ( 19) 00:10:12.061 12809.309 - 12868.887: 94.8892% ( 16) 00:10:12.061 12868.887 - 12928.465: 95.0354% ( 19) 00:10:12.061 12928.465 - 12988.044: 95.1817% ( 19) 00:10:12.061 12988.044 - 13047.622: 95.3433% ( 21) 00:10:12.061 13047.622 - 13107.200: 95.5049% ( 21) 00:10:12.061 13107.200 - 13166.778: 95.6512% ( 19) 00:10:12.061 13166.778 - 13226.356: 95.8205% ( 22) 00:10:12.061 13226.356 - 13285.935: 95.9898% ( 22) 00:10:12.061 13285.935 - 13345.513: 96.1823% ( 25) 00:10:12.061 13345.513 - 13405.091: 96.3593% ( 23) 00:10:12.061 13405.091 - 13464.669: 96.5286% ( 22) 00:10:12.061 13464.669 - 13524.247: 96.6903% ( 21) 00:10:12.061 13524.247 - 13583.825: 96.7826% ( 12) 00:10:12.061 13583.825 - 13643.404: 96.9289% ( 19) 00:10:12.061 13643.404 - 13702.982: 97.0597% ( 17) 00:10:12.061 13702.982 - 13762.560: 97.1906% ( 17) 00:10:12.061 13762.560 - 13822.138: 97.3060% ( 15) 00:10:12.061 13822.138 - 13881.716: 97.4292% ( 16) 00:10:12.061 13881.716 - 13941.295: 97.5292% ( 13) 00:10:12.061 13941.295 - 14000.873: 97.6216% ( 12) 00:10:12.061 14000.873 - 14060.451: 97.6986% ( 10) 00:10:12.061 14060.451 - 14120.029: 97.7833% ( 11) 00:10:12.061 14120.029 - 
14179.607: 97.8833% ( 13) 00:10:12.061 14179.607 - 14239.185: 97.9680% ( 11) 00:10:12.061 14239.185 - 14298.764: 98.0526% ( 11) 00:10:12.061 14298.764 - 14358.342: 98.1296% ( 10) 00:10:12.061 14358.342 - 14417.920: 98.1758% ( 6) 00:10:12.061 14417.920 - 14477.498: 98.1989% ( 3) 00:10:12.061 14477.498 - 14537.076: 98.2297% ( 4) 00:10:12.061 14537.076 - 14596.655: 98.2605% ( 4) 00:10:12.061 14596.655 - 14656.233: 98.2913% ( 4) 00:10:12.061 14656.233 - 14715.811: 98.3220% ( 4) 00:10:12.061 14715.811 - 14775.389: 98.3451% ( 3) 00:10:12.061 14775.389 - 14834.967: 98.3759% ( 4) 00:10:12.061 14834.967 - 14894.545: 98.4144% ( 5) 00:10:12.061 14894.545 - 14954.124: 98.4760% ( 8) 00:10:12.061 14954.124 - 15013.702: 98.5299% ( 7) 00:10:12.061 15013.702 - 15073.280: 98.5837% ( 7) 00:10:12.061 15073.280 - 15132.858: 98.6376% ( 7) 00:10:12.061 15132.858 - 15192.436: 98.6607% ( 3) 00:10:12.061 15192.436 - 15252.015: 98.6915% ( 4) 00:10:12.061 15252.015 - 15371.171: 98.7377% ( 6) 00:10:12.061 15371.171 - 15490.327: 98.7839% ( 6) 00:10:12.061 15490.327 - 15609.484: 98.8300% ( 6) 00:10:12.061 15609.484 - 15728.640: 98.8839% ( 7) 00:10:12.061 15728.640 - 15847.796: 98.9301% ( 6) 00:10:12.061 15847.796 - 15966.953: 98.9840% ( 7) 00:10:12.061 15966.953 - 16086.109: 99.0148% ( 4) 00:10:12.061 24546.211 - 24665.367: 99.0225% ( 1) 00:10:12.061 24665.367 - 24784.524: 99.0687% ( 6) 00:10:12.061 24784.524 - 24903.680: 99.1148% ( 6) 00:10:12.061 24903.680 - 25022.836: 99.1456% ( 4) 00:10:12.061 25022.836 - 25141.993: 99.1918% ( 6) 00:10:12.061 25141.993 - 25261.149: 99.2380% ( 6) 00:10:12.061 25261.149 - 25380.305: 99.2765% ( 5) 00:10:12.061 25380.305 - 25499.462: 99.3227% ( 6) 00:10:12.061 25499.462 - 25618.618: 99.3688% ( 6) 00:10:12.061 25618.618 - 25737.775: 99.4073% ( 5) 00:10:12.061 25737.775 - 25856.931: 99.4535% ( 6) 00:10:12.061 25856.931 - 25976.087: 99.4920% ( 5) 00:10:12.061 25976.087 - 26095.244: 99.5074% ( 2) 00:10:12.061 30742.342 - 30980.655: 99.5151% ( 1) 00:10:12.061 30980.655 - 31218.967: 99.5921% ( 10) 00:10:12.061 31218.967 - 31457.280: 99.6767% ( 11) 00:10:12.061 31457.280 - 31695.593: 99.7614% ( 11) 00:10:12.061 31695.593 - 31933.905: 99.8461% ( 11) 00:10:12.061 31933.905 - 32172.218: 99.9307% ( 11) 00:10:12.061 32172.218 - 32410.531: 100.0000% ( 9) 00:10:12.061 00:10:12.061 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:12.061 ============================================================================== 00:10:12.061 Range in us Cumulative IO count 00:10:12.061 5064.145 - 5093.935: 0.0154% ( 2) 00:10:12.061 5093.935 - 5123.724: 0.0385% ( 3) 00:10:12.061 5123.724 - 5153.513: 0.0462% ( 1) 00:10:12.061 5153.513 - 5183.302: 0.0616% ( 2) 00:10:12.061 5183.302 - 5213.091: 0.0770% ( 2) 00:10:12.061 5213.091 - 5242.880: 0.1001% ( 3) 00:10:12.061 5242.880 - 5272.669: 0.1155% ( 2) 00:10:12.061 5272.669 - 5302.458: 0.1308% ( 2) 00:10:12.061 5302.458 - 5332.247: 0.1462% ( 2) 00:10:12.061 5332.247 - 5362.036: 0.1616% ( 2) 00:10:12.061 5362.036 - 5391.825: 0.1770% ( 2) 00:10:12.061 5391.825 - 5421.615: 0.1847% ( 1) 00:10:12.061 5421.615 - 5451.404: 0.2001% ( 2) 00:10:12.061 5451.404 - 5481.193: 0.2078% ( 1) 00:10:12.061 5481.193 - 5510.982: 0.2232% ( 2) 00:10:12.061 5510.982 - 5540.771: 0.2386% ( 2) 00:10:12.061 5540.771 - 5570.560: 0.2617% ( 3) 00:10:12.061 5570.560 - 5600.349: 0.2771% ( 2) 00:10:12.061 5600.349 - 5630.138: 0.2925% ( 2) 00:10:12.061 5630.138 - 5659.927: 0.3079% ( 2) 00:10:12.061 5659.927 - 5689.716: 0.3233% ( 2) 00:10:12.061 5689.716 - 5719.505: 0.3387% ( 2) 
00:10:12.061 5719.505 - 5749.295: 0.3618% ( 3) 00:10:12.061 5749.295 - 5779.084: 0.3772% ( 2) 00:10:12.061 5779.084 - 5808.873: 0.3849% ( 1) 00:10:12.061 5808.873 - 5838.662: 0.3925% ( 1) 00:10:12.061 5838.662 - 5868.451: 0.4079% ( 2) 00:10:12.061 5868.451 - 5898.240: 0.4233% ( 2) 00:10:12.061 5898.240 - 5928.029: 0.4387% ( 2) 00:10:12.061 5928.029 - 5957.818: 0.4464% ( 1) 00:10:12.062 5957.818 - 5987.607: 0.4618% ( 2) 00:10:12.062 5987.607 - 6017.396: 0.4772% ( 2) 00:10:12.062 6017.396 - 6047.185: 0.4926% ( 2) 00:10:12.062 7506.851 - 7536.640: 0.5157% ( 3) 00:10:12.062 7536.640 - 7566.429: 0.5234% ( 1) 00:10:12.062 7566.429 - 7596.218: 0.5465% ( 3) 00:10:12.062 7596.218 - 7626.007: 0.5619% ( 2) 00:10:12.062 7626.007 - 7685.585: 0.5927% ( 4) 00:10:12.062 7685.585 - 7745.164: 0.6312% ( 5) 00:10:12.062 7745.164 - 7804.742: 0.6619% ( 4) 00:10:12.062 7804.742 - 7864.320: 0.6927% ( 4) 00:10:12.062 7864.320 - 7923.898: 0.7235% ( 4) 00:10:12.062 7923.898 - 7983.476: 0.7543% ( 4) 00:10:12.062 7983.476 - 8043.055: 0.7851% ( 4) 00:10:12.062 8043.055 - 8102.633: 0.8159% ( 4) 00:10:12.062 8102.633 - 8162.211: 0.8467% ( 4) 00:10:12.062 8162.211 - 8221.789: 0.8929% ( 6) 00:10:12.062 8221.789 - 8281.367: 1.0314% ( 18) 00:10:12.062 8281.367 - 8340.945: 1.3085% ( 36) 00:10:12.062 8340.945 - 8400.524: 1.8781% ( 74) 00:10:12.062 8400.524 - 8460.102: 2.6401% ( 99) 00:10:12.062 8460.102 - 8519.680: 3.8793% ( 161) 00:10:12.062 8519.680 - 8579.258: 5.5958% ( 223) 00:10:12.062 8579.258 - 8638.836: 7.8818% ( 297) 00:10:12.062 8638.836 - 8698.415: 10.7374% ( 371) 00:10:12.062 8698.415 - 8757.993: 14.0317% ( 428) 00:10:12.062 8757.993 - 8817.571: 17.7417% ( 482) 00:10:12.062 8817.571 - 8877.149: 21.5132% ( 490) 00:10:12.062 8877.149 - 8936.727: 25.3233% ( 495) 00:10:12.062 8936.727 - 8996.305: 29.2334% ( 508) 00:10:12.062 8996.305 - 9055.884: 33.1512% ( 509) 00:10:12.062 9055.884 - 9115.462: 36.9535% ( 494) 00:10:12.062 9115.462 - 9175.040: 40.8328% ( 504) 00:10:12.062 9175.040 - 9234.618: 44.6506% ( 496) 00:10:12.062 9234.618 - 9294.196: 48.3143% ( 476) 00:10:12.062 9294.196 - 9353.775: 51.8319% ( 457) 00:10:12.062 9353.775 - 9413.353: 55.1339% ( 429) 00:10:12.062 9413.353 - 9472.931: 58.0280% ( 376) 00:10:12.062 9472.931 - 9532.509: 60.7913% ( 359) 00:10:12.062 9532.509 - 9592.087: 63.0850% ( 298) 00:10:12.062 9592.087 - 9651.665: 64.9400% ( 241) 00:10:12.062 9651.665 - 9711.244: 66.6256% ( 219) 00:10:12.062 9711.244 - 9770.822: 67.9957% ( 178) 00:10:12.062 9770.822 - 9830.400: 69.2811% ( 167) 00:10:12.062 9830.400 - 9889.978: 70.4895% ( 157) 00:10:12.062 9889.978 - 9949.556: 71.6441% ( 150) 00:10:12.062 9949.556 - 10009.135: 72.6524% ( 131) 00:10:12.062 10009.135 - 10068.713: 73.6376% ( 128) 00:10:12.062 10068.713 - 10128.291: 74.6459% ( 131) 00:10:12.062 10128.291 - 10187.869: 75.7389% ( 142) 00:10:12.062 10187.869 - 10247.447: 76.8473% ( 144) 00:10:12.062 10247.447 - 10307.025: 78.0095% ( 151) 00:10:12.062 10307.025 - 10366.604: 79.2180% ( 157) 00:10:12.062 10366.604 - 10426.182: 80.3110% ( 142) 00:10:12.062 10426.182 - 10485.760: 81.4347% ( 146) 00:10:12.062 10485.760 - 10545.338: 82.5277% ( 142) 00:10:12.062 10545.338 - 10604.916: 83.5976% ( 139) 00:10:12.062 10604.916 - 10664.495: 84.5751% ( 127) 00:10:12.062 10664.495 - 10724.073: 85.5757% ( 130) 00:10:12.062 10724.073 - 10783.651: 86.4994% ( 120) 00:10:12.062 10783.651 - 10843.229: 87.3076% ( 105) 00:10:12.062 10843.229 - 10902.807: 88.0080% ( 91) 00:10:12.062 10902.807 - 10962.385: 88.6084% ( 78) 00:10:12.062 10962.385 - 11021.964: 89.0779% ( 61) 
00:10:12.062 11021.964 - 11081.542: 89.5551% ( 62) 00:10:12.062 11081.542 - 11141.120: 89.9631% ( 53) 00:10:12.062 11141.120 - 11200.698: 90.3325% ( 48) 00:10:12.062 11200.698 - 11260.276: 90.6712% ( 44) 00:10:12.062 11260.276 - 11319.855: 91.0022% ( 43) 00:10:12.062 11319.855 - 11379.433: 91.2792% ( 36) 00:10:12.062 11379.433 - 11439.011: 91.5486% ( 35) 00:10:12.062 11439.011 - 11498.589: 91.8103% ( 34) 00:10:12.062 11498.589 - 11558.167: 92.0490% ( 31) 00:10:12.062 11558.167 - 11617.745: 92.2799% ( 30) 00:10:12.062 11617.745 - 11677.324: 92.5031% ( 29) 00:10:12.062 11677.324 - 11736.902: 92.7109% ( 27) 00:10:12.062 11736.902 - 11796.480: 92.9341% ( 29) 00:10:12.062 11796.480 - 11856.058: 93.1342% ( 26) 00:10:12.062 11856.058 - 11915.636: 93.3498% ( 28) 00:10:12.062 11915.636 - 11975.215: 93.5576% ( 27) 00:10:12.062 11975.215 - 12034.793: 93.7500% ( 25) 00:10:12.062 12034.793 - 12094.371: 93.9039% ( 20) 00:10:12.062 12094.371 - 12153.949: 94.0194% ( 15) 00:10:12.062 12153.949 - 12213.527: 94.1272% ( 14) 00:10:12.062 12213.527 - 12273.105: 94.2118% ( 11) 00:10:12.062 12273.105 - 12332.684: 94.3196% ( 14) 00:10:12.062 12332.684 - 12392.262: 94.4042% ( 11) 00:10:12.062 12392.262 - 12451.840: 94.5043% ( 13) 00:10:12.062 12451.840 - 12511.418: 94.6044% ( 13) 00:10:12.062 12511.418 - 12570.996: 94.6967% ( 12) 00:10:12.062 12570.996 - 12630.575: 94.7737% ( 10) 00:10:12.062 12630.575 - 12690.153: 94.8507% ( 10) 00:10:12.062 12690.153 - 12749.731: 94.9276% ( 10) 00:10:12.062 12749.731 - 12809.309: 95.0277% ( 13) 00:10:12.062 12809.309 - 12868.887: 95.1201% ( 12) 00:10:12.062 12868.887 - 12928.465: 95.2201% ( 13) 00:10:12.062 12928.465 - 12988.044: 95.3356% ( 15) 00:10:12.062 12988.044 - 13047.622: 95.4664% ( 17) 00:10:12.062 13047.622 - 13107.200: 95.5896% ( 16) 00:10:12.062 13107.200 - 13166.778: 95.7050% ( 15) 00:10:12.062 13166.778 - 13226.356: 95.8205% ( 15) 00:10:12.062 13226.356 - 13285.935: 95.9514% ( 17) 00:10:12.062 13285.935 - 13345.513: 96.0745% ( 16) 00:10:12.062 13345.513 - 13405.091: 96.1900% ( 15) 00:10:12.062 13405.091 - 13464.669: 96.3208% ( 17) 00:10:12.062 13464.669 - 13524.247: 96.4440% ( 16) 00:10:12.062 13524.247 - 13583.825: 96.5517% ( 14) 00:10:12.062 13583.825 - 13643.404: 96.6441% ( 12) 00:10:12.062 13643.404 - 13702.982: 96.7518% ( 14) 00:10:12.062 13702.982 - 13762.560: 96.8981% ( 19) 00:10:12.062 13762.560 - 13822.138: 97.0135% ( 15) 00:10:12.062 13822.138 - 13881.716: 97.1136% ( 13) 00:10:12.062 13881.716 - 13941.295: 97.2137% ( 13) 00:10:12.062 13941.295 - 14000.873: 97.2983% ( 11) 00:10:12.062 14000.873 - 14060.451: 97.3907% ( 12) 00:10:12.062 14060.451 - 14120.029: 97.5062% ( 15) 00:10:12.062 14120.029 - 14179.607: 97.6216% ( 15) 00:10:12.062 14179.607 - 14239.185: 97.7217% ( 13) 00:10:12.062 14239.185 - 14298.764: 97.8294% ( 14) 00:10:12.062 14298.764 - 14358.342: 97.9526% ( 16) 00:10:12.062 14358.342 - 14417.920: 98.0296% ( 10) 00:10:12.062 14417.920 - 14477.498: 98.0834% ( 7) 00:10:12.062 14477.498 - 14537.076: 98.1450% ( 8) 00:10:12.062 14537.076 - 14596.655: 98.1989% ( 7) 00:10:12.062 14596.655 - 14656.233: 98.2528% ( 7) 00:10:12.062 14656.233 - 14715.811: 98.3143% ( 8) 00:10:12.062 14715.811 - 14775.389: 98.3682% ( 7) 00:10:12.062 14775.389 - 14834.967: 98.4452% ( 10) 00:10:12.062 14834.967 - 14894.545: 98.5145% ( 9) 00:10:12.062 14894.545 - 14954.124: 98.5607% ( 6) 00:10:12.062 14954.124 - 15013.702: 98.6145% ( 7) 00:10:12.062 15013.702 - 15073.280: 98.6453% ( 4) 00:10:12.062 15073.280 - 15132.858: 98.6684% ( 3) 00:10:12.062 15132.858 - 15192.436: 98.6992% ( 
4) 00:10:12.062 15192.436 - 15252.015: 98.7223% ( 3) 00:10:12.062 15252.015 - 15371.171: 98.7762% ( 7) 00:10:12.062 15371.171 - 15490.327: 98.8070% ( 4) 00:10:12.062 15490.327 - 15609.484: 98.8608% ( 7) 00:10:12.062 15609.484 - 15728.640: 98.9147% ( 7) 00:10:12.062 15728.640 - 15847.796: 98.9609% ( 6) 00:10:12.062 15847.796 - 15966.953: 99.0071% ( 6) 00:10:12.062 15966.953 - 16086.109: 99.0148% ( 1) 00:10:12.062 23831.273 - 23950.429: 99.0456% ( 4) 00:10:12.062 23950.429 - 24069.585: 99.0917% ( 6) 00:10:12.062 24069.585 - 24188.742: 99.1302% ( 5) 00:10:12.062 24188.742 - 24307.898: 99.1764% ( 6) 00:10:12.062 24307.898 - 24427.055: 99.2226% ( 6) 00:10:12.062 24427.055 - 24546.211: 99.2688% ( 6) 00:10:12.062 24546.211 - 24665.367: 99.2996% ( 4) 00:10:12.062 24665.367 - 24784.524: 99.3458% ( 6) 00:10:12.062 24784.524 - 24903.680: 99.3842% ( 5) 00:10:12.062 24903.680 - 25022.836: 99.4304% ( 6) 00:10:12.062 25022.836 - 25141.993: 99.4766% ( 6) 00:10:12.062 25141.993 - 25261.149: 99.5074% ( 4) 00:10:12.062 30146.560 - 30265.716: 99.5382% ( 4) 00:10:12.062 30265.716 - 30384.873: 99.5690% ( 4) 00:10:12.062 30384.873 - 30504.029: 99.5998% ( 4) 00:10:12.062 30504.029 - 30742.342: 99.6767% ( 10) 00:10:12.062 30742.342 - 30980.655: 99.7537% ( 10) 00:10:12.062 30980.655 - 31218.967: 99.8461% ( 12) 00:10:12.062 31218.967 - 31457.280: 99.9307% ( 11) 00:10:12.062 31457.280 - 31695.593: 100.0000% ( 9) 00:10:12.062 00:10:12.062 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:12.062 ============================================================================== 00:10:12.062 Range in us Cumulative IO count 00:10:12.062 4796.044 - 4825.833: 0.0154% ( 2) 00:10:12.062 4825.833 - 4855.622: 0.0308% ( 2) 00:10:12.062 4855.622 - 4885.411: 0.0462% ( 2) 00:10:12.062 4885.411 - 4915.200: 0.0616% ( 2) 00:10:12.062 4915.200 - 4944.989: 0.0847% ( 3) 00:10:12.062 4944.989 - 4974.778: 0.1001% ( 2) 00:10:12.062 4974.778 - 5004.567: 0.1155% ( 2) 00:10:12.062 5004.567 - 5034.356: 0.1308% ( 2) 00:10:12.062 5034.356 - 5064.145: 0.1539% ( 3) 00:10:12.062 5064.145 - 5093.935: 0.1693% ( 2) 00:10:12.062 5093.935 - 5123.724: 0.1847% ( 2) 00:10:12.062 5123.724 - 5153.513: 0.2001% ( 2) 00:10:12.062 5153.513 - 5183.302: 0.2155% ( 2) 00:10:12.062 5183.302 - 5213.091: 0.2309% ( 2) 00:10:12.062 5213.091 - 5242.880: 0.2463% ( 2) 00:10:12.062 5242.880 - 5272.669: 0.2694% ( 3) 00:10:12.062 5272.669 - 5302.458: 0.2771% ( 1) 00:10:12.063 5302.458 - 5332.247: 0.2848% ( 1) 00:10:12.063 5332.247 - 5362.036: 0.3079% ( 3) 00:10:12.063 5362.036 - 5391.825: 0.3233% ( 2) 00:10:12.063 5391.825 - 5421.615: 0.3387% ( 2) 00:10:12.063 5421.615 - 5451.404: 0.3541% ( 2) 00:10:12.063 5451.404 - 5481.193: 0.3695% ( 2) 00:10:12.063 5481.193 - 5510.982: 0.3925% ( 3) 00:10:12.063 5510.982 - 5540.771: 0.4079% ( 2) 00:10:12.063 5540.771 - 5570.560: 0.4156% ( 1) 00:10:12.063 5570.560 - 5600.349: 0.4310% ( 2) 00:10:12.063 5600.349 - 5630.138: 0.4464% ( 2) 00:10:12.063 5630.138 - 5659.927: 0.4618% ( 2) 00:10:12.063 5659.927 - 5689.716: 0.4772% ( 2) 00:10:12.063 5689.716 - 5719.505: 0.4926% ( 2) 00:10:12.063 7119.593 - 7149.382: 0.5080% ( 2) 00:10:12.063 7149.382 - 7179.171: 0.5234% ( 2) 00:10:12.063 7179.171 - 7208.960: 0.5311% ( 1) 00:10:12.063 7208.960 - 7238.749: 0.5542% ( 3) 00:10:12.063 7238.749 - 7268.538: 0.5696% ( 2) 00:10:12.063 7268.538 - 7298.327: 0.5850% ( 2) 00:10:12.063 7298.327 - 7328.116: 0.6081% ( 3) 00:10:12.063 7328.116 - 7357.905: 0.6235% ( 2) 00:10:12.063 7357.905 - 7387.695: 0.6389% ( 2) 00:10:12.063 7387.695 - 7417.484: 0.6542% 
( 2) 00:10:12.063 7417.484 - 7447.273: 0.6773% ( 3) 00:10:12.063 7447.273 - 7477.062: 0.6850% ( 1) 00:10:12.063 7477.062 - 7506.851: 0.7004% ( 2) 00:10:12.063 7506.851 - 7536.640: 0.7081% ( 1) 00:10:12.063 7536.640 - 7566.429: 0.7235% ( 2) 00:10:12.063 7566.429 - 7596.218: 0.7389% ( 2) 00:10:12.063 7596.218 - 7626.007: 0.7543% ( 2) 00:10:12.063 7626.007 - 7685.585: 0.7851% ( 4) 00:10:12.063 7685.585 - 7745.164: 0.8159% ( 4) 00:10:12.063 7745.164 - 7804.742: 0.8544% ( 5) 00:10:12.063 7804.742 - 7864.320: 0.8775% ( 3) 00:10:12.063 7864.320 - 7923.898: 0.9083% ( 4) 00:10:12.063 7923.898 - 7983.476: 0.9390% ( 4) 00:10:12.063 7983.476 - 8043.055: 0.9544% ( 2) 00:10:12.063 8043.055 - 8102.633: 0.9852% ( 4) 00:10:12.063 8102.633 - 8162.211: 1.0006% ( 2) 00:10:12.063 8162.211 - 8221.789: 1.0314% ( 4) 00:10:12.063 8221.789 - 8281.367: 1.1238% ( 12) 00:10:12.063 8281.367 - 8340.945: 1.3547% ( 30) 00:10:12.063 8340.945 - 8400.524: 1.7549% ( 52) 00:10:12.063 8400.524 - 8460.102: 2.4554% ( 91) 00:10:12.063 8460.102 - 8519.680: 3.6561% ( 156) 00:10:12.063 8519.680 - 8579.258: 5.4726% ( 236) 00:10:12.063 8579.258 - 8638.836: 7.7894% ( 301) 00:10:12.063 8638.836 - 8698.415: 10.7836% ( 389) 00:10:12.063 8698.415 - 8757.993: 14.0702% ( 427) 00:10:12.063 8757.993 - 8817.571: 17.5647% ( 454) 00:10:12.063 8817.571 - 8877.149: 21.3362% ( 490) 00:10:12.063 8877.149 - 8936.727: 25.0616% ( 484) 00:10:12.063 8936.727 - 8996.305: 28.9101% ( 500) 00:10:12.063 8996.305 - 9055.884: 32.7509% ( 499) 00:10:12.063 9055.884 - 9115.462: 36.6225% ( 503) 00:10:12.063 9115.462 - 9175.040: 40.4249% ( 494) 00:10:12.063 9175.040 - 9234.618: 44.2272% ( 494) 00:10:12.063 9234.618 - 9294.196: 47.8910% ( 476) 00:10:12.063 9294.196 - 9353.775: 51.4163% ( 458) 00:10:12.063 9353.775 - 9413.353: 54.8953% ( 452) 00:10:12.063 9413.353 - 9472.931: 57.9126% ( 392) 00:10:12.063 9472.931 - 9532.509: 60.6219% ( 352) 00:10:12.063 9532.509 - 9592.087: 62.9387% ( 301) 00:10:12.063 9592.087 - 9651.665: 64.9246% ( 258) 00:10:12.063 9651.665 - 9711.244: 66.4794% ( 202) 00:10:12.063 9711.244 - 9770.822: 67.9033% ( 185) 00:10:12.063 9770.822 - 9830.400: 69.1810% ( 166) 00:10:12.063 9830.400 - 9889.978: 70.3972% ( 158) 00:10:12.063 9889.978 - 9949.556: 71.5671% ( 152) 00:10:12.063 9949.556 - 10009.135: 72.5523% ( 128) 00:10:12.063 10009.135 - 10068.713: 73.5683% ( 132) 00:10:12.063 10068.713 - 10128.291: 74.5536% ( 128) 00:10:12.063 10128.291 - 10187.869: 75.6312% ( 140) 00:10:12.063 10187.869 - 10247.447: 76.7010% ( 139) 00:10:12.063 10247.447 - 10307.025: 77.7863% ( 141) 00:10:12.063 10307.025 - 10366.604: 78.9101% ( 146) 00:10:12.063 10366.604 - 10426.182: 80.0185% ( 144) 00:10:12.063 10426.182 - 10485.760: 81.1653% ( 149) 00:10:12.063 10485.760 - 10545.338: 82.3122% ( 149) 00:10:12.063 10545.338 - 10604.916: 83.3898% ( 140) 00:10:12.063 10604.916 - 10664.495: 84.4828% ( 142) 00:10:12.063 10664.495 - 10724.073: 85.5373% ( 137) 00:10:12.063 10724.073 - 10783.651: 86.4455% ( 118) 00:10:12.063 10783.651 - 10843.229: 87.2768% ( 108) 00:10:12.063 10843.229 - 10902.807: 88.0157% ( 96) 00:10:12.063 10902.807 - 10962.385: 88.5930% ( 75) 00:10:12.063 10962.385 - 11021.964: 89.2010% ( 79) 00:10:12.063 11021.964 - 11081.542: 89.6244% ( 55) 00:10:12.063 11081.542 - 11141.120: 90.0400% ( 54) 00:10:12.063 11141.120 - 11200.698: 90.4249% ( 50) 00:10:12.063 11200.698 - 11260.276: 90.7712% ( 45) 00:10:12.063 11260.276 - 11319.855: 91.0945% ( 42) 00:10:12.063 11319.855 - 11379.433: 91.3947% ( 39) 00:10:12.063 11379.433 - 11439.011: 91.6795% ( 37) 00:10:12.063 11439.011 
- 11498.589: 91.9489% ( 35) 00:10:12.063 11498.589 - 11558.167: 92.1567% ( 27) 00:10:12.063 11558.167 - 11617.745: 92.3722% ( 28) 00:10:12.063 11617.745 - 11677.324: 92.6262% ( 33) 00:10:12.063 11677.324 - 11736.902: 92.8494% ( 29) 00:10:12.063 11736.902 - 11796.480: 93.0573% ( 27) 00:10:12.063 11796.480 - 11856.058: 93.2574% ( 26) 00:10:12.063 11856.058 - 11915.636: 93.4344% ( 23) 00:10:12.063 11915.636 - 11975.215: 93.6422% ( 27) 00:10:12.063 11975.215 - 12034.793: 93.8347% ( 25) 00:10:12.063 12034.793 - 12094.371: 94.0271% ( 25) 00:10:12.063 12094.371 - 12153.949: 94.2118% ( 24) 00:10:12.063 12153.949 - 12213.527: 94.3889% ( 23) 00:10:12.063 12213.527 - 12273.105: 94.4966% ( 14) 00:10:12.063 12273.105 - 12332.684: 94.6275% ( 17) 00:10:12.063 12332.684 - 12392.262: 94.7429% ( 15) 00:10:12.063 12392.262 - 12451.840: 94.8199% ( 10) 00:10:12.063 12451.840 - 12511.418: 94.9200% ( 13) 00:10:12.063 12511.418 - 12570.996: 95.0431% ( 16) 00:10:12.063 12570.996 - 12630.575: 95.1432% ( 13) 00:10:12.063 12630.575 - 12690.153: 95.2509% ( 14) 00:10:12.063 12690.153 - 12749.731: 95.3510% ( 13) 00:10:12.063 12749.731 - 12809.309: 95.4433% ( 12) 00:10:12.063 12809.309 - 12868.887: 95.5280% ( 11) 00:10:12.063 12868.887 - 12928.465: 95.6281% ( 13) 00:10:12.063 12928.465 - 12988.044: 95.7204% ( 12) 00:10:12.063 12988.044 - 13047.622: 95.8128% ( 12) 00:10:12.063 13047.622 - 13107.200: 95.8667% ( 7) 00:10:12.063 13107.200 - 13166.778: 95.9360% ( 9) 00:10:12.063 13166.778 - 13226.356: 95.9898% ( 7) 00:10:12.063 13226.356 - 13285.935: 96.0437% ( 7) 00:10:12.063 13285.935 - 13345.513: 96.1207% ( 10) 00:10:12.063 13345.513 - 13405.091: 96.1977% ( 10) 00:10:12.063 13405.091 - 13464.669: 96.2669% ( 9) 00:10:12.063 13464.669 - 13524.247: 96.3439% ( 10) 00:10:12.063 13524.247 - 13583.825: 96.4132% ( 9) 00:10:12.063 13583.825 - 13643.404: 96.4825% ( 9) 00:10:12.063 13643.404 - 13702.982: 96.5440% ( 8) 00:10:12.063 13702.982 - 13762.560: 96.6287% ( 11) 00:10:12.063 13762.560 - 13822.138: 96.7211% ( 12) 00:10:12.063 13822.138 - 13881.716: 96.8134% ( 12) 00:10:12.063 13881.716 - 13941.295: 96.9058% ( 12) 00:10:12.063 13941.295 - 14000.873: 97.0058% ( 13) 00:10:12.063 14000.873 - 14060.451: 97.1213% ( 15) 00:10:12.063 14060.451 - 14120.029: 97.2522% ( 17) 00:10:12.063 14120.029 - 14179.607: 97.3676% ( 15) 00:10:12.063 14179.607 - 14239.185: 97.4831% ( 15) 00:10:12.063 14239.185 - 14298.764: 97.6139% ( 17) 00:10:12.063 14298.764 - 14358.342: 97.7756% ( 21) 00:10:12.063 14358.342 - 14417.920: 97.9218% ( 19) 00:10:12.063 14417.920 - 14477.498: 98.0603% ( 18) 00:10:12.063 14477.498 - 14537.076: 98.1835% ( 16) 00:10:12.063 14537.076 - 14596.655: 98.2913% ( 14) 00:10:12.063 14596.655 - 14656.233: 98.3836% ( 12) 00:10:12.063 14656.233 - 14715.811: 98.5068% ( 16) 00:10:12.063 14715.811 - 14775.389: 98.5991% ( 12) 00:10:12.063 14775.389 - 14834.967: 98.6838% ( 11) 00:10:12.063 14834.967 - 14894.545: 98.7531% ( 9) 00:10:12.063 14894.545 - 14954.124: 98.8070% ( 7) 00:10:12.063 14954.124 - 15013.702: 98.8454% ( 5) 00:10:12.063 15013.702 - 15073.280: 98.8685% ( 3) 00:10:12.063 15073.280 - 15132.858: 98.8993% ( 4) 00:10:12.063 15132.858 - 15192.436: 98.9224% ( 3) 00:10:12.063 15192.436 - 15252.015: 98.9455% ( 3) 00:10:12.063 15252.015 - 15371.171: 98.9994% ( 7) 00:10:12.063 15371.171 - 15490.327: 99.0148% ( 2) 00:10:12.063 22997.178 - 23116.335: 99.0379% ( 3) 00:10:12.063 23116.335 - 23235.491: 99.0764% ( 5) 00:10:12.063 23235.491 - 23354.647: 99.1148% ( 5) 00:10:12.063 23354.647 - 23473.804: 99.1610% ( 6) 00:10:12.063 23473.804 - 
23592.960: 99.2072% ( 6) 00:10:12.063 23592.960 - 23712.116: 99.2457% ( 5) 00:10:12.063 23712.116 - 23831.273: 99.2919% ( 6) 00:10:12.063 23831.273 - 23950.429: 99.3304% ( 5) 00:10:12.063 23950.429 - 24069.585: 99.3765% ( 6) 00:10:12.063 24069.585 - 24188.742: 99.4227% ( 6) 00:10:12.063 24188.742 - 24307.898: 99.4689% ( 6) 00:10:12.063 24307.898 - 24427.055: 99.4997% ( 4) 00:10:12.063 24427.055 - 24546.211: 99.5074% ( 1) 00:10:12.063 29312.465 - 29431.622: 99.5151% ( 1) 00:10:12.063 29431.622 - 29550.778: 99.5536% ( 5) 00:10:12.063 29550.778 - 29669.935: 99.5921% ( 5) 00:10:12.063 29669.935 - 29789.091: 99.6305% ( 5) 00:10:12.063 29789.091 - 29908.247: 99.6767% ( 6) 00:10:12.063 29908.247 - 30027.404: 99.7152% ( 5) 00:10:12.063 30027.404 - 30146.560: 99.7537% ( 5) 00:10:12.063 30146.560 - 30265.716: 99.7922% ( 5) 00:10:12.063 30265.716 - 30384.873: 99.8384% ( 6) 00:10:12.063 30384.873 - 30504.029: 99.8768% ( 5) 00:10:12.063 30504.029 - 30742.342: 99.9615% ( 11) 00:10:12.064 30742.342 - 30980.655: 100.0000% ( 5) 00:10:12.064 00:10:12.064 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:12.064 ============================================================================== 00:10:12.064 Range in us Cumulative IO count 00:10:12.064 4408.785 - 4438.575: 0.0154% ( 2) 00:10:12.064 4438.575 - 4468.364: 0.0308% ( 2) 00:10:12.064 4468.364 - 4498.153: 0.0462% ( 2) 00:10:12.064 4498.153 - 4527.942: 0.0616% ( 2) 00:10:12.064 4527.942 - 4557.731: 0.0693% ( 1) 00:10:12.064 4557.731 - 4587.520: 0.0847% ( 2) 00:10:12.064 4587.520 - 4617.309: 0.1001% ( 2) 00:10:12.064 4617.309 - 4647.098: 0.1155% ( 2) 00:10:12.064 4647.098 - 4676.887: 0.1308% ( 2) 00:10:12.064 4676.887 - 4706.676: 0.1462% ( 2) 00:10:12.064 4706.676 - 4736.465: 0.1693% ( 3) 00:10:12.064 4736.465 - 4766.255: 0.1847% ( 2) 00:10:12.064 4766.255 - 4796.044: 0.2001% ( 2) 00:10:12.064 4796.044 - 4825.833: 0.2155% ( 2) 00:10:12.064 4825.833 - 4855.622: 0.2309% ( 2) 00:10:12.064 4855.622 - 4885.411: 0.2463% ( 2) 00:10:12.064 4885.411 - 4915.200: 0.2617% ( 2) 00:10:12.064 4915.200 - 4944.989: 0.2771% ( 2) 00:10:12.064 4944.989 - 4974.778: 0.2925% ( 2) 00:10:12.064 4974.778 - 5004.567: 0.3079% ( 2) 00:10:12.064 5004.567 - 5034.356: 0.3233% ( 2) 00:10:12.064 5034.356 - 5064.145: 0.3387% ( 2) 00:10:12.064 5064.145 - 5093.935: 0.3541% ( 2) 00:10:12.064 5093.935 - 5123.724: 0.3695% ( 2) 00:10:12.064 5123.724 - 5153.513: 0.3925% ( 3) 00:10:12.064 5153.513 - 5183.302: 0.4079% ( 2) 00:10:12.064 5183.302 - 5213.091: 0.4233% ( 2) 00:10:12.064 5213.091 - 5242.880: 0.4310% ( 1) 00:10:12.064 5242.880 - 5272.669: 0.4387% ( 1) 00:10:12.064 5272.669 - 5302.458: 0.4541% ( 2) 00:10:12.064 5302.458 - 5332.247: 0.4618% ( 1) 00:10:12.064 5332.247 - 5362.036: 0.4772% ( 2) 00:10:12.064 5362.036 - 5391.825: 0.4849% ( 1) 00:10:12.064 5391.825 - 5421.615: 0.4926% ( 1) 00:10:12.064 6791.913 - 6821.702: 0.5080% ( 2) 00:10:12.064 6821.702 - 6851.491: 0.5234% ( 2) 00:10:12.064 6851.491 - 6881.280: 0.5388% ( 2) 00:10:12.064 6881.280 - 6911.069: 0.5619% ( 3) 00:10:12.064 6911.069 - 6940.858: 0.5773% ( 2) 00:10:12.064 6940.858 - 6970.647: 0.5927% ( 2) 00:10:12.064 6970.647 - 7000.436: 0.6081% ( 2) 00:10:12.064 7000.436 - 7030.225: 0.6235% ( 2) 00:10:12.064 7030.225 - 7060.015: 0.6389% ( 2) 00:10:12.064 7060.015 - 7089.804: 0.6542% ( 2) 00:10:12.064 7089.804 - 7119.593: 0.6696% ( 2) 00:10:12.064 7119.593 - 7149.382: 0.6850% ( 2) 00:10:12.064 7149.382 - 7179.171: 0.7081% ( 3) 00:10:12.064 7179.171 - 7208.960: 0.7235% ( 2) 00:10:12.064 7208.960 - 7238.749: 0.7312% ( 1) 
00:10:12.064 7238.749 - 7268.538: 0.7466% ( 2) 00:10:12.064 7268.538 - 7298.327: 0.7620% ( 2) 00:10:12.064 7298.327 - 7328.116: 0.7774% ( 2) 00:10:12.064 7328.116 - 7357.905: 0.7928% ( 2) 00:10:12.064 7357.905 - 7387.695: 0.8082% ( 2) 00:10:12.064 7387.695 - 7417.484: 0.8236% ( 2) 00:10:12.064 7417.484 - 7447.273: 0.8467% ( 3) 00:10:12.064 7447.273 - 7477.062: 0.8621% ( 2) 00:10:12.064 7477.062 - 7506.851: 0.8698% ( 1) 00:10:12.064 7506.851 - 7536.640: 0.8852% ( 2) 00:10:12.064 7536.640 - 7566.429: 0.9006% ( 2) 00:10:12.064 7566.429 - 7596.218: 0.9083% ( 1) 00:10:12.064 7596.218 - 7626.007: 0.9236% ( 2) 00:10:12.064 7626.007 - 7685.585: 0.9544% ( 4) 00:10:12.064 7685.585 - 7745.164: 0.9852% ( 4) 00:10:12.064 8162.211 - 8221.789: 1.0391% ( 7) 00:10:12.064 8221.789 - 8281.367: 1.1546% ( 15) 00:10:12.064 8281.367 - 8340.945: 1.3239% ( 22) 00:10:12.064 8340.945 - 8400.524: 1.7395% ( 54) 00:10:12.064 8400.524 - 8460.102: 2.3014% ( 73) 00:10:12.064 8460.102 - 8519.680: 3.5022% ( 156) 00:10:12.064 8519.680 - 8579.258: 5.1878% ( 219) 00:10:12.064 8579.258 - 8638.836: 7.6278% ( 317) 00:10:12.064 8638.836 - 8698.415: 10.6373% ( 391) 00:10:12.064 8698.415 - 8757.993: 13.9547% ( 431) 00:10:12.064 8757.993 - 8817.571: 17.4415% ( 453) 00:10:12.064 8817.571 - 8877.149: 21.1746% ( 485) 00:10:12.064 8877.149 - 8936.727: 24.9615% ( 492) 00:10:12.064 8936.727 - 8996.305: 28.8639% ( 507) 00:10:12.064 8996.305 - 9055.884: 32.7971% ( 511) 00:10:12.064 9055.884 - 9115.462: 36.7688% ( 516) 00:10:12.064 9115.462 - 9175.040: 40.6250% ( 501) 00:10:12.064 9175.040 - 9234.618: 44.4735% ( 500) 00:10:12.064 9234.618 - 9294.196: 48.1527% ( 478) 00:10:12.064 9294.196 - 9353.775: 51.6241% ( 451) 00:10:12.064 9353.775 - 9413.353: 55.0339% ( 443) 00:10:12.064 9413.353 - 9472.931: 58.0588% ( 393) 00:10:12.064 9472.931 - 9532.509: 60.8067% ( 357) 00:10:12.064 9532.509 - 9592.087: 63.1081% ( 299) 00:10:12.064 9592.087 - 9651.665: 64.8938% ( 232) 00:10:12.064 9651.665 - 9711.244: 66.4871% ( 207) 00:10:12.064 9711.244 - 9770.822: 67.9726% ( 193) 00:10:12.064 9770.822 - 9830.400: 69.2349% ( 164) 00:10:12.064 9830.400 - 9889.978: 70.4049% ( 152) 00:10:12.064 9889.978 - 9949.556: 71.5671% ( 151) 00:10:12.064 9949.556 - 10009.135: 72.6216% ( 137) 00:10:12.064 10009.135 - 10068.713: 73.6376% ( 132) 00:10:12.064 10068.713 - 10128.291: 74.6690% ( 134) 00:10:12.064 10128.291 - 10187.869: 75.7158% ( 136) 00:10:12.064 10187.869 - 10247.447: 76.8011% ( 141) 00:10:12.064 10247.447 - 10307.025: 77.9095% ( 144) 00:10:12.064 10307.025 - 10366.604: 79.0102% ( 143) 00:10:12.064 10366.604 - 10426.182: 80.1108% ( 143) 00:10:12.064 10426.182 - 10485.760: 81.2038% ( 142) 00:10:12.064 10485.760 - 10545.338: 82.2968% ( 142) 00:10:12.064 10545.338 - 10604.916: 83.3744% ( 140) 00:10:12.064 10604.916 - 10664.495: 84.4366% ( 138) 00:10:12.064 10664.495 - 10724.073: 85.4064% ( 126) 00:10:12.064 10724.073 - 10783.651: 86.2531% ( 110) 00:10:12.064 10783.651 - 10843.229: 87.0228% ( 100) 00:10:12.064 10843.229 - 10902.807: 87.7694% ( 97) 00:10:12.064 10902.807 - 10962.385: 88.3775% ( 79) 00:10:12.064 10962.385 - 11021.964: 88.8624% ( 63) 00:10:12.064 11021.964 - 11081.542: 89.3242% ( 60) 00:10:12.064 11081.542 - 11141.120: 89.7629% ( 57) 00:10:12.064 11141.120 - 11200.698: 90.1170% ( 46) 00:10:12.064 11200.698 - 11260.276: 90.4634% ( 45) 00:10:12.064 11260.276 - 11319.855: 90.7635% ( 39) 00:10:12.064 11319.855 - 11379.433: 91.1099% ( 45) 00:10:12.064 11379.433 - 11439.011: 91.4486% ( 44) 00:10:12.064 11439.011 - 11498.589: 91.7411% ( 38) 00:10:12.064 11498.589 
- 11558.167: 91.9797% ( 31) 00:10:12.064 11558.167 - 11617.745: 92.2106% ( 30) 00:10:12.064 11617.745 - 11677.324: 92.4492% ( 31) 00:10:12.064 11677.324 - 11736.902: 92.6647% ( 28) 00:10:12.064 11736.902 - 11796.480: 92.8648% ( 26) 00:10:12.064 11796.480 - 11856.058: 93.0958% ( 30) 00:10:12.064 11856.058 - 11915.636: 93.2882% ( 25) 00:10:12.064 11915.636 - 11975.215: 93.4883% ( 26) 00:10:12.064 11975.215 - 12034.793: 93.6576% ( 22) 00:10:12.064 12034.793 - 12094.371: 93.8424% ( 24) 00:10:12.064 12094.371 - 12153.949: 94.0194% ( 23) 00:10:12.064 12153.949 - 12213.527: 94.1656% ( 19) 00:10:12.064 12213.527 - 12273.105: 94.2734% ( 14) 00:10:12.064 12273.105 - 12332.684: 94.4196% ( 19) 00:10:12.064 12332.684 - 12392.262: 94.5274% ( 14) 00:10:12.064 12392.262 - 12451.840: 94.6583% ( 17) 00:10:12.064 12451.840 - 12511.418: 94.7737% ( 15) 00:10:12.064 12511.418 - 12570.996: 94.8815% ( 14) 00:10:12.064 12570.996 - 12630.575: 94.9969% ( 15) 00:10:12.064 12630.575 - 12690.153: 95.0816% ( 11) 00:10:12.064 12690.153 - 12749.731: 95.1509% ( 9) 00:10:12.064 12749.731 - 12809.309: 95.2509% ( 13) 00:10:12.064 12809.309 - 12868.887: 95.3202% ( 9) 00:10:12.064 12868.887 - 12928.465: 95.3972% ( 10) 00:10:12.064 12928.465 - 12988.044: 95.4818% ( 11) 00:10:12.064 12988.044 - 13047.622: 95.5742% ( 12) 00:10:12.064 13047.622 - 13107.200: 95.6897% ( 15) 00:10:12.064 13107.200 - 13166.778: 95.8051% ( 15) 00:10:12.064 13166.778 - 13226.356: 95.9052% ( 13) 00:10:12.064 13226.356 - 13285.935: 96.0129% ( 14) 00:10:12.064 13285.935 - 13345.513: 96.1130% ( 13) 00:10:12.064 13345.513 - 13405.091: 96.2208% ( 14) 00:10:12.064 13405.091 - 13464.669: 96.3439% ( 16) 00:10:12.064 13464.669 - 13524.247: 96.4209% ( 10) 00:10:12.064 13524.247 - 13583.825: 96.5286% ( 14) 00:10:12.064 13583.825 - 13643.404: 96.6595% ( 17) 00:10:12.064 13643.404 - 13702.982: 96.7826% ( 16) 00:10:12.064 13702.982 - 13762.560: 96.9289% ( 19) 00:10:12.064 13762.560 - 13822.138: 97.0674% ( 18) 00:10:12.064 13822.138 - 13881.716: 97.2214% ( 20) 00:10:12.064 13881.716 - 13941.295: 97.3753% ( 20) 00:10:12.064 13941.295 - 14000.873: 97.5369% ( 21) 00:10:12.064 14000.873 - 14060.451: 97.6601% ( 16) 00:10:12.064 14060.451 - 14120.029: 97.7756% ( 15) 00:10:12.064 14120.029 - 14179.607: 97.8987% ( 16) 00:10:12.064 14179.607 - 14239.185: 98.0065% ( 14) 00:10:12.064 14239.185 - 14298.764: 98.1219% ( 15) 00:10:12.064 14298.764 - 14358.342: 98.2220% ( 13) 00:10:12.064 14358.342 - 14417.920: 98.3143% ( 12) 00:10:12.064 14417.920 - 14477.498: 98.4144% ( 13) 00:10:12.064 14477.498 - 14537.076: 98.4760% ( 8) 00:10:12.064 14537.076 - 14596.655: 98.5530% ( 10) 00:10:12.064 14596.655 - 14656.233: 98.6376% ( 11) 00:10:12.064 14656.233 - 14715.811: 98.7069% ( 9) 00:10:12.064 14715.811 - 14775.389: 98.7762% ( 9) 00:10:12.064 14775.389 - 14834.967: 98.8300% ( 7) 00:10:12.064 14834.967 - 14894.545: 98.8839% ( 7) 00:10:12.064 14894.545 - 14954.124: 98.9301% ( 6) 00:10:12.064 14954.124 - 15013.702: 98.9840% ( 7) 00:10:12.064 15013.702 - 15073.280: 99.0148% ( 4) 00:10:12.064 22163.084 - 22282.240: 99.0379% ( 3) 00:10:12.064 22282.240 - 22401.396: 99.0764% ( 5) 00:10:12.064 22401.396 - 22520.553: 99.1225% ( 6) 00:10:12.064 22520.553 - 22639.709: 99.1687% ( 6) 00:10:12.064 22639.709 - 22758.865: 99.2072% ( 5) 00:10:12.064 22758.865 - 22878.022: 99.2457% ( 5) 00:10:12.064 22878.022 - 22997.178: 99.2919% ( 6) 00:10:12.064 22997.178 - 23116.335: 99.3304% ( 5) 00:10:12.064 23116.335 - 23235.491: 99.3688% ( 5) 00:10:12.064 23235.491 - 23354.647: 99.4150% ( 6) 00:10:12.064 23354.647 - 
23473.804: 99.4612% ( 6) 00:10:12.064 23473.804 - 23592.960: 99.4920% ( 4) 00:10:12.064 23592.960 - 23712.116: 99.5074% ( 2) 00:10:12.064 28597.527 - 28716.684: 99.5459% ( 5) 00:10:12.064 28716.684 - 28835.840: 99.5921% ( 6) 00:10:12.064 28835.840 - 28954.996: 99.6228% ( 4) 00:10:12.065 28954.996 - 29074.153: 99.6613% ( 5) 00:10:12.065 29074.153 - 29193.309: 99.6998% ( 5) 00:10:12.065 29193.309 - 29312.465: 99.7383% ( 5) 00:10:12.065 29312.465 - 29431.622: 99.7845% ( 6) 00:10:12.065 29431.622 - 29550.778: 99.8230% ( 5) 00:10:12.065 29550.778 - 29669.935: 99.8692% ( 6) 00:10:12.065 29669.935 - 29789.091: 99.9076% ( 5) 00:10:12.065 29789.091 - 29908.247: 99.9538% ( 6) 00:10:12.065 29908.247 - 30027.404: 99.9769% ( 3) 00:10:12.065 30027.404 - 30146.560: 100.0000% ( 3) 00:10:12.065 00:10:12.065 13:09:08 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:10:13.014 Initializing NVMe Controllers 00:10:13.014 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:13.014 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:13.014 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:13.014 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:13.014 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:13.014 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:13.014 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:13.014 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:13.014 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:13.014 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:13.014 Initialization complete. Launching workers. 00:10:13.014 ======================================================== 00:10:13.014 Latency(us) 00:10:13.014 Device Information : IOPS MiB/s Average min max 00:10:13.014 PCIE (0000:00:10.0) NSID 1 from core 0: 11559.83 135.47 11081.41 8727.64 35562.79 00:10:13.014 PCIE (0000:00:11.0) NSID 1 from core 0: 11559.83 135.47 11069.66 8624.75 34139.25 00:10:13.014 PCIE (0000:00:13.0) NSID 1 from core 0: 11559.83 135.47 11054.90 7198.61 33888.20 00:10:13.015 PCIE (0000:00:12.0) NSID 1 from core 0: 11559.83 135.47 11039.34 7111.33 32704.57 00:10:13.015 PCIE (0000:00:12.0) NSID 2 from core 0: 11559.83 135.47 11023.41 6689.48 31654.03 00:10:13.015 PCIE (0000:00:12.0) NSID 3 from core 0: 11559.83 135.47 11007.54 6233.56 30404.83 00:10:13.015 ======================================================== 00:10:13.015 Total : 69358.97 812.80 11046.04 6233.56 35562.79 00:10:13.015 00:10:13.015 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:13.015 ================================================================================= 00:10:13.015 1.00000% : 9472.931us 00:10:13.015 10.00000% : 9889.978us 00:10:13.015 25.00000% : 10247.447us 00:10:13.015 50.00000% : 10783.651us 00:10:13.015 75.00000% : 11439.011us 00:10:13.015 90.00000% : 12153.949us 00:10:13.015 95.00000% : 12570.996us 00:10:13.015 98.00000% : 13405.091us 00:10:13.015 99.00000% : 24427.055us 00:10:13.015 99.50000% : 34078.720us 00:10:13.015 99.90000% : 35270.284us 00:10:13.015 99.99000% : 35508.596us 00:10:13.015 99.99900% : 35746.909us 00:10:13.015 99.99990% : 35746.909us 00:10:13.015 99.99999% : 35746.909us 00:10:13.015 00:10:13.015 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:13.015 ================================================================================= 00:10:13.015 1.00000% : 9711.244us 00:10:13.015 10.00000% : 
10009.135us 00:10:13.015 25.00000% : 10307.025us 00:10:13.015 50.00000% : 10783.651us 00:10:13.015 75.00000% : 11379.433us 00:10:13.015 90.00000% : 12034.793us 00:10:13.015 95.00000% : 12511.418us 00:10:13.015 98.00000% : 13047.622us 00:10:13.015 99.00000% : 24784.524us 00:10:13.015 99.50000% : 32172.218us 00:10:13.015 99.90000% : 34078.720us 00:10:13.015 99.99000% : 34317.033us 00:10:13.015 99.99900% : 34317.033us 00:10:13.015 99.99990% : 34317.033us 00:10:13.015 99.99999% : 34317.033us 00:10:13.015 00:10:13.015 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:13.015 ================================================================================= 00:10:13.015 1.00000% : 9651.665us 00:10:13.015 10.00000% : 10009.135us 00:10:13.015 25.00000% : 10307.025us 00:10:13.015 50.00000% : 10783.651us 00:10:13.015 75.00000% : 11379.433us 00:10:13.015 90.00000% : 12034.793us 00:10:13.015 95.00000% : 12511.418us 00:10:13.015 98.00000% : 13107.200us 00:10:13.015 99.00000% : 24427.055us 00:10:13.015 99.50000% : 31933.905us 00:10:13.015 99.90000% : 33840.407us 00:10:13.015 99.99000% : 34078.720us 00:10:13.015 99.99900% : 34078.720us 00:10:13.015 99.99990% : 34078.720us 00:10:13.015 99.99999% : 34078.720us 00:10:13.015 00:10:13.015 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:13.015 ================================================================================= 00:10:13.015 1.00000% : 9711.244us 00:10:13.015 10.00000% : 10009.135us 00:10:13.015 25.00000% : 10307.025us 00:10:13.015 50.00000% : 10783.651us 00:10:13.015 75.00000% : 11379.433us 00:10:13.015 90.00000% : 11975.215us 00:10:13.015 95.00000% : 12451.840us 00:10:13.015 98.00000% : 13107.200us 00:10:13.015 99.00000% : 22997.178us 00:10:13.015 99.50000% : 31457.280us 00:10:13.015 99.90000% : 32648.844us 00:10:13.015 99.99000% : 32887.156us 00:10:13.015 99.99900% : 32887.156us 00:10:13.015 99.99990% : 32887.156us 00:10:13.015 99.99999% : 32887.156us 00:10:13.015 00:10:13.015 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:13.015 ================================================================================= 00:10:13.015 1.00000% : 9651.665us 00:10:13.015 10.00000% : 10009.135us 00:10:13.015 25.00000% : 10307.025us 00:10:13.015 50.00000% : 10783.651us 00:10:13.015 75.00000% : 11379.433us 00:10:13.015 90.00000% : 11975.215us 00:10:13.015 95.00000% : 12392.262us 00:10:13.015 98.00000% : 13047.622us 00:10:13.015 99.00000% : 21924.771us 00:10:13.015 99.50000% : 29431.622us 00:10:13.015 99.90000% : 31457.280us 00:10:13.015 99.99000% : 31695.593us 00:10:13.015 99.99900% : 31695.593us 00:10:13.015 99.99990% : 31695.593us 00:10:13.015 99.99999% : 31695.593us 00:10:13.015 00:10:13.015 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:13.015 ================================================================================= 00:10:13.015 1.00000% : 9532.509us 00:10:13.015 10.00000% : 10009.135us 00:10:13.015 25.00000% : 10307.025us 00:10:13.015 50.00000% : 10783.651us 00:10:13.015 75.00000% : 11379.433us 00:10:13.015 90.00000% : 12034.793us 00:10:13.015 95.00000% : 12392.262us 00:10:13.015 98.00000% : 13226.356us 00:10:13.015 99.00000% : 20852.364us 00:10:13.015 99.50000% : 28359.215us 00:10:13.015 99.90000% : 30265.716us 00:10:13.015 99.99000% : 30384.873us 00:10:13.015 99.99900% : 30504.029us 00:10:13.015 99.99990% : 30504.029us 00:10:13.015 99.99999% : 30504.029us 00:10:13.015 00:10:13.015 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:13.015 
============================================================================== 00:10:13.015 Range in us Cumulative IO count 00:10:13.015 8698.415 - 8757.993: 0.0173% ( 2) 00:10:13.015 8757.993 - 8817.571: 0.0432% ( 3) 00:10:13.015 8817.571 - 8877.149: 0.0863% ( 5) 00:10:13.015 8877.149 - 8936.727: 0.1727% ( 10) 00:10:13.015 8936.727 - 8996.305: 0.2244% ( 6) 00:10:13.015 8996.305 - 9055.884: 0.2590% ( 4) 00:10:13.015 9055.884 - 9115.462: 0.3194% ( 7) 00:10:13.015 9115.462 - 9175.040: 0.4489% ( 15) 00:10:13.015 9175.040 - 9234.618: 0.4662% ( 2) 00:10:13.015 9234.618 - 9294.196: 0.5007% ( 4) 00:10:13.015 9294.196 - 9353.775: 0.6215% ( 14) 00:10:13.015 9353.775 - 9413.353: 0.9669% ( 40) 00:10:13.015 9413.353 - 9472.931: 1.6057% ( 74) 00:10:13.015 9472.931 - 9532.509: 2.4171% ( 94) 00:10:13.015 9532.509 - 9592.087: 3.4962% ( 125) 00:10:13.015 9592.087 - 9651.665: 4.8170% ( 153) 00:10:13.015 9651.665 - 9711.244: 6.6126% ( 208) 00:10:13.015 9711.244 - 9770.822: 8.3736% ( 204) 00:10:13.015 9770.822 - 9830.400: 9.9620% ( 184) 00:10:13.015 9830.400 - 9889.978: 11.6195% ( 192) 00:10:13.015 9889.978 - 9949.556: 13.8467% ( 258) 00:10:13.015 9949.556 - 10009.135: 16.3674% ( 292) 00:10:13.015 10009.135 - 10068.713: 18.7327% ( 274) 00:10:13.015 10068.713 - 10128.291: 21.0635% ( 270) 00:10:13.015 10128.291 - 10187.869: 23.4634% ( 278) 00:10:13.015 10187.869 - 10247.447: 25.7251% ( 262) 00:10:13.015 10247.447 - 10307.025: 28.0387% ( 268) 00:10:13.015 10307.025 - 10366.604: 30.4385% ( 278) 00:10:13.015 10366.604 - 10426.182: 32.8470% ( 279) 00:10:13.015 10426.182 - 10485.760: 35.1692% ( 269) 00:10:13.015 10485.760 - 10545.338: 37.9921% ( 327) 00:10:13.015 10545.338 - 10604.916: 41.1084% ( 361) 00:10:13.015 10604.916 - 10664.495: 44.6305% ( 408) 00:10:13.015 10664.495 - 10724.073: 47.6692% ( 352) 00:10:13.015 10724.073 - 10783.651: 50.3367% ( 309) 00:10:13.015 10783.651 - 10843.229: 53.2977% ( 343) 00:10:13.015 10843.229 - 10902.807: 55.9997% ( 313) 00:10:13.015 10902.807 - 10962.385: 58.4858% ( 288) 00:10:13.015 10962.385 - 11021.964: 60.5749% ( 242) 00:10:13.015 11021.964 - 11081.542: 62.6813% ( 244) 00:10:13.015 11081.542 - 11141.120: 64.8394% ( 250) 00:10:13.015 11141.120 - 11200.698: 67.2911% ( 284) 00:10:13.015 11200.698 - 11260.276: 69.5356% ( 260) 00:10:13.015 11260.276 - 11319.855: 71.7196% ( 253) 00:10:13.015 11319.855 - 11379.433: 73.5238% ( 209) 00:10:13.015 11379.433 - 11439.011: 75.4489% ( 223) 00:10:13.015 11439.011 - 11498.589: 77.1064% ( 192) 00:10:13.015 11498.589 - 11558.167: 78.4271% ( 153) 00:10:13.015 11558.167 - 11617.745: 79.9551% ( 177) 00:10:13.015 11617.745 - 11677.324: 81.3104% ( 157) 00:10:13.015 11677.324 - 11736.902: 82.6140% ( 151) 00:10:13.015 11736.902 - 11796.480: 84.2196% ( 186) 00:10:13.015 11796.480 - 11856.058: 85.5663% ( 156) 00:10:13.016 11856.058 - 11915.636: 86.9389% ( 159) 00:10:13.016 11915.636 - 11975.215: 88.0180% ( 125) 00:10:13.016 11975.215 - 12034.793: 88.9157% ( 104) 00:10:13.016 12034.793 - 12094.371: 89.8653% ( 110) 00:10:13.016 12094.371 - 12153.949: 90.7286% ( 100) 00:10:13.016 12153.949 - 12213.527: 91.4969% ( 89) 00:10:13.016 12213.527 - 12273.105: 92.1875% ( 80) 00:10:13.016 12273.105 - 12332.684: 92.7141% ( 61) 00:10:13.016 12332.684 - 12392.262: 93.4824% ( 89) 00:10:13.016 12392.262 - 12451.840: 93.9831% ( 58) 00:10:13.016 12451.840 - 12511.418: 94.6478% ( 77) 00:10:13.016 12511.418 - 12570.996: 95.1916% ( 63) 00:10:13.016 12570.996 - 12630.575: 95.5715% ( 44) 00:10:13.016 12630.575 - 12690.153: 95.9599% ( 45) 00:10:13.016 12690.153 - 12749.731: 
96.2880% ( 38) 00:10:13.016 12749.731 - 12809.309: 96.5470% ( 30) 00:10:13.016 12809.309 - 12868.887: 96.8232% ( 32) 00:10:13.016 12868.887 - 12928.465: 97.0304% ( 24) 00:10:13.016 12928.465 - 12988.044: 97.2117% ( 21) 00:10:13.016 12988.044 - 13047.622: 97.4275% ( 25) 00:10:13.016 13047.622 - 13107.200: 97.5397% ( 13) 00:10:13.016 13107.200 - 13166.778: 97.6433% ( 12) 00:10:13.016 13166.778 - 13226.356: 97.7296% ( 10) 00:10:13.016 13226.356 - 13285.935: 97.8591% ( 15) 00:10:13.016 13285.935 - 13345.513: 97.9627% ( 12) 00:10:13.016 13345.513 - 13405.091: 98.0404% ( 9) 00:10:13.016 13405.091 - 13464.669: 98.1008% ( 7) 00:10:13.016 13464.669 - 13524.247: 98.1699% ( 8) 00:10:13.016 13524.247 - 13583.825: 98.2390% ( 8) 00:10:13.016 13583.825 - 13643.404: 98.2735% ( 4) 00:10:13.016 13643.404 - 13702.982: 98.2907% ( 2) 00:10:13.016 13702.982 - 13762.560: 98.3080% ( 2) 00:10:13.016 13762.560 - 13822.138: 98.3943% ( 10) 00:10:13.016 13822.138 - 13881.716: 98.4720% ( 9) 00:10:13.016 13881.716 - 13941.295: 98.4893% ( 2) 00:10:13.016 13941.295 - 14000.873: 98.5411% ( 6) 00:10:13.016 14000.873 - 14060.451: 98.5756% ( 4) 00:10:13.016 14060.451 - 14120.029: 98.6015% ( 3) 00:10:13.016 14120.029 - 14179.607: 98.6447% ( 5) 00:10:13.016 14179.607 - 14239.185: 98.6706% ( 3) 00:10:13.016 14239.185 - 14298.764: 98.6792% ( 1) 00:10:13.016 14298.764 - 14358.342: 98.7051% ( 3) 00:10:13.016 14358.342 - 14417.920: 98.7224% ( 2) 00:10:13.016 14477.498 - 14537.076: 98.7396% ( 2) 00:10:13.016 14537.076 - 14596.655: 98.7742% ( 4) 00:10:13.016 14596.655 - 14656.233: 98.7914% ( 2) 00:10:13.016 14656.233 - 14715.811: 98.8173% ( 3) 00:10:13.016 14715.811 - 14775.389: 98.8432% ( 3) 00:10:13.016 14775.389 - 14834.967: 98.8605% ( 2) 00:10:13.016 14834.967 - 14894.545: 98.8778% ( 2) 00:10:13.016 14894.545 - 14954.124: 98.8950% ( 2) 00:10:13.016 23950.429 - 24069.585: 98.9382% ( 5) 00:10:13.016 24069.585 - 24188.742: 98.9641% ( 3) 00:10:13.016 24188.742 - 24307.898: 98.9986% ( 4) 00:10:13.016 24307.898 - 24427.055: 99.0590% ( 7) 00:10:13.016 24427.055 - 24546.211: 99.0849% ( 3) 00:10:13.016 24546.211 - 24665.367: 99.1281% ( 5) 00:10:13.016 24665.367 - 24784.524: 99.1626% ( 4) 00:10:13.016 24784.524 - 24903.680: 99.2058% ( 5) 00:10:13.016 24903.680 - 25022.836: 99.2490% ( 5) 00:10:13.016 25022.836 - 25141.993: 99.2921% ( 5) 00:10:13.016 25141.993 - 25261.149: 99.3267% ( 4) 00:10:13.016 25261.149 - 25380.305: 99.3785% ( 6) 00:10:13.016 25380.305 - 25499.462: 99.4130% ( 4) 00:10:13.016 25499.462 - 25618.618: 99.4389% ( 3) 00:10:13.016 25618.618 - 25737.775: 99.4475% ( 1) 00:10:13.016 33602.095 - 33840.407: 99.4561% ( 1) 00:10:13.016 33840.407 - 34078.720: 99.5252% ( 8) 00:10:13.016 34078.720 - 34317.033: 99.6029% ( 9) 00:10:13.016 34317.033 - 34555.345: 99.6892% ( 10) 00:10:13.016 34555.345 - 34793.658: 99.7669% ( 9) 00:10:13.016 34793.658 - 35031.971: 99.8446% ( 9) 00:10:13.016 35031.971 - 35270.284: 99.9309% ( 10) 00:10:13.016 35270.284 - 35508.596: 99.9914% ( 7) 00:10:13.016 35508.596 - 35746.909: 100.0000% ( 1) 00:10:13.016 00:10:13.016 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:13.016 ============================================================================== 00:10:13.016 Range in us Cumulative IO count 00:10:13.016 8579.258 - 8638.836: 0.0173% ( 2) 00:10:13.016 8638.836 - 8698.415: 0.1036% ( 10) 00:10:13.016 8698.415 - 8757.993: 0.2072% ( 12) 00:10:13.016 8757.993 - 8817.571: 0.3367% ( 15) 00:10:13.016 8817.571 - 8877.149: 0.3885% ( 6) 00:10:13.016 8877.149 - 8936.727: 0.4057% ( 2) 00:10:13.016 
8936.727 - 8996.305: 0.4316% ( 3) 00:10:13.016 8996.305 - 9055.884: 0.4575% ( 3) 00:10:13.016 9055.884 - 9115.462: 0.4834% ( 3) 00:10:13.016 9115.462 - 9175.040: 0.5093% ( 3) 00:10:13.016 9175.040 - 9234.618: 0.5352% ( 3) 00:10:13.016 9234.618 - 9294.196: 0.5611% ( 3) 00:10:13.016 9294.196 - 9353.775: 0.5698% ( 1) 00:10:13.016 9413.353 - 9472.931: 0.5870% ( 2) 00:10:13.016 9472.931 - 9532.509: 0.6215% ( 4) 00:10:13.016 9532.509 - 9592.087: 0.6906% ( 8) 00:10:13.016 9592.087 - 9651.665: 0.9669% ( 32) 00:10:13.016 9651.665 - 9711.244: 1.4848% ( 60) 00:10:13.016 9711.244 - 9770.822: 2.3653% ( 102) 00:10:13.016 9770.822 - 9830.400: 3.8501% ( 172) 00:10:13.016 9830.400 - 9889.978: 5.8011% ( 226) 00:10:13.016 9889.978 - 9949.556: 8.3218% ( 292) 00:10:13.016 9949.556 - 10009.135: 11.6626% ( 387) 00:10:13.016 10009.135 - 10068.713: 14.9689% ( 383) 00:10:13.016 10068.713 - 10128.291: 17.9558% ( 346) 00:10:13.016 10128.291 - 10187.869: 21.0031% ( 353) 00:10:13.016 10187.869 - 10247.447: 24.8273% ( 443) 00:10:13.016 10247.447 - 10307.025: 28.1854% ( 389) 00:10:13.016 10307.025 - 10366.604: 31.1637% ( 345) 00:10:13.016 10366.604 - 10426.182: 33.9693% ( 325) 00:10:13.016 10426.182 - 10485.760: 36.8698% ( 336) 00:10:13.016 10485.760 - 10545.338: 39.5546% ( 311) 00:10:13.016 10545.338 - 10604.916: 42.6191% ( 355) 00:10:13.016 10604.916 - 10664.495: 45.3729% ( 319) 00:10:13.016 10664.495 - 10724.073: 48.4548% ( 357) 00:10:13.016 10724.073 - 10783.651: 50.9237% ( 286) 00:10:13.016 10783.651 - 10843.229: 53.4271% ( 290) 00:10:13.016 10843.229 - 10902.807: 55.8443% ( 280) 00:10:13.016 10902.807 - 10962.385: 58.7794% ( 340) 00:10:13.016 10962.385 - 11021.964: 61.7403% ( 343) 00:10:13.016 11021.964 - 11081.542: 64.1316% ( 277) 00:10:13.016 11081.542 - 11141.120: 66.6091% ( 287) 00:10:13.016 11141.120 - 11200.698: 69.0521% ( 283) 00:10:13.016 11200.698 - 11260.276: 71.2880% ( 259) 00:10:13.016 11260.276 - 11319.855: 73.5670% ( 264) 00:10:13.016 11319.855 - 11379.433: 75.4057% ( 213) 00:10:13.016 11379.433 - 11439.011: 76.9941% ( 184) 00:10:13.016 11439.011 - 11498.589: 78.6084% ( 187) 00:10:13.016 11498.589 - 11558.167: 80.1968% ( 184) 00:10:13.016 11558.167 - 11617.745: 81.5262% ( 154) 00:10:13.016 11617.745 - 11677.324: 82.7952% ( 147) 00:10:13.016 11677.324 - 11736.902: 84.1160% ( 153) 00:10:13.016 11736.902 - 11796.480: 85.5318% ( 164) 00:10:13.016 11796.480 - 11856.058: 86.8526% ( 153) 00:10:13.016 11856.058 - 11915.636: 88.2079% ( 157) 00:10:13.016 11915.636 - 11975.215: 89.2610% ( 122) 00:10:13.016 11975.215 - 12034.793: 90.4006% ( 132) 00:10:13.017 12034.793 - 12094.371: 91.3156% ( 106) 00:10:13.017 12094.371 - 12153.949: 92.1184% ( 93) 00:10:13.017 12153.949 - 12213.527: 92.7314% ( 71) 00:10:13.017 12213.527 - 12273.105: 93.3788% ( 75) 00:10:13.017 12273.105 - 12332.684: 93.9140% ( 62) 00:10:13.017 12332.684 - 12392.262: 94.3715% ( 53) 00:10:13.017 12392.262 - 12451.840: 94.7686% ( 46) 00:10:13.017 12451.840 - 12511.418: 95.1571% ( 45) 00:10:13.017 12511.418 - 12570.996: 95.5197% ( 42) 00:10:13.017 12570.996 - 12630.575: 95.8650% ( 40) 00:10:13.017 12630.575 - 12690.153: 96.1844% ( 37) 00:10:13.017 12690.153 - 12749.731: 96.5038% ( 37) 00:10:13.017 12749.731 - 12809.309: 96.8318% ( 38) 00:10:13.017 12809.309 - 12868.887: 97.0994% ( 31) 00:10:13.017 12868.887 - 12928.465: 97.5224% ( 49) 00:10:13.017 12928.465 - 12988.044: 97.7814% ( 30) 00:10:13.017 12988.044 - 13047.622: 98.0059% ( 26) 00:10:13.017 13047.622 - 13107.200: 98.1526% ( 17) 00:10:13.017 13107.200 - 13166.778: 98.3080% ( 18) 00:10:13.017 
13166.778 - 13226.356: 98.4116% ( 12) 00:10:13.017 13226.356 - 13285.935: 98.5411% ( 15) 00:10:13.017 13285.935 - 13345.513: 98.6360% ( 11) 00:10:13.017 13345.513 - 13405.091: 98.7137% ( 9) 00:10:13.017 13405.091 - 13464.669: 98.7655% ( 6) 00:10:13.017 13464.669 - 13524.247: 98.8260% ( 7) 00:10:13.017 13524.247 - 13583.825: 98.8691% ( 5) 00:10:13.017 13583.825 - 13643.404: 98.8950% ( 3) 00:10:13.017 24546.211 - 24665.367: 98.9555% ( 7) 00:10:13.017 24665.367 - 24784.524: 99.0504% ( 11) 00:10:13.017 24784.524 - 24903.680: 99.1713% ( 14) 00:10:13.017 24903.680 - 25022.836: 99.2058% ( 4) 00:10:13.017 25022.836 - 25141.993: 99.2490% ( 5) 00:10:13.017 25141.993 - 25261.149: 99.2835% ( 4) 00:10:13.017 25261.149 - 25380.305: 99.3180% ( 4) 00:10:13.017 25380.305 - 25499.462: 99.3612% ( 5) 00:10:13.017 25499.462 - 25618.618: 99.3957% ( 4) 00:10:13.017 25618.618 - 25737.775: 99.4389% ( 5) 00:10:13.017 25737.775 - 25856.931: 99.4475% ( 1) 00:10:13.017 31933.905 - 32172.218: 99.5425% ( 11) 00:10:13.017 32648.844 - 32887.156: 99.5511% ( 1) 00:10:13.017 32887.156 - 33125.469: 99.6115% ( 7) 00:10:13.017 33125.469 - 33363.782: 99.6892% ( 9) 00:10:13.017 33363.782 - 33602.095: 99.7842% ( 11) 00:10:13.017 33602.095 - 33840.407: 99.8791% ( 11) 00:10:13.017 33840.407 - 34078.720: 99.9741% ( 11) 00:10:13.017 34078.720 - 34317.033: 100.0000% ( 3) 00:10:13.017 00:10:13.017 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:13.017 ============================================================================== 00:10:13.017 Range in us Cumulative IO count 00:10:13.017 7179.171 - 7208.960: 0.0086% ( 1) 00:10:13.017 7477.062 - 7506.851: 0.0345% ( 3) 00:10:13.017 7506.851 - 7536.640: 0.0950% ( 7) 00:10:13.017 7536.640 - 7566.429: 0.1985% ( 12) 00:10:13.017 7566.429 - 7596.218: 0.2849% ( 10) 00:10:13.017 7596.218 - 7626.007: 0.3712% ( 10) 00:10:13.017 7626.007 - 7685.585: 0.4057% ( 4) 00:10:13.017 7685.585 - 7745.164: 0.4316% ( 3) 00:10:13.017 7745.164 - 7804.742: 0.4575% ( 3) 00:10:13.017 7804.742 - 7864.320: 0.4834% ( 3) 00:10:13.017 7864.320 - 7923.898: 0.5093% ( 3) 00:10:13.017 7923.898 - 7983.476: 0.5266% ( 2) 00:10:13.017 7983.476 - 8043.055: 0.5525% ( 3) 00:10:13.017 9234.618 - 9294.196: 0.5611% ( 1) 00:10:13.017 9294.196 - 9353.775: 0.5698% ( 1) 00:10:13.017 9353.775 - 9413.353: 0.5870% ( 2) 00:10:13.017 9413.353 - 9472.931: 0.6215% ( 4) 00:10:13.017 9472.931 - 9532.509: 0.7079% ( 10) 00:10:13.017 9532.509 - 9592.087: 0.9151% ( 24) 00:10:13.017 9592.087 - 9651.665: 1.2776% ( 42) 00:10:13.017 9651.665 - 9711.244: 1.8646% ( 68) 00:10:13.017 9711.244 - 9770.822: 2.8142% ( 110) 00:10:13.017 9770.822 - 9830.400: 4.4803% ( 193) 00:10:13.017 9830.400 - 9889.978: 6.1723% ( 196) 00:10:13.017 9889.978 - 9949.556: 7.9938% ( 211) 00:10:13.017 9949.556 - 10009.135: 10.8771% ( 334) 00:10:13.017 10009.135 - 10068.713: 13.5877% ( 314) 00:10:13.017 10068.713 - 10128.291: 17.2997% ( 430) 00:10:13.017 10128.291 - 10187.869: 20.8477% ( 411) 00:10:13.017 10187.869 - 10247.447: 24.2576% ( 395) 00:10:13.017 10247.447 - 10307.025: 27.7020% ( 399) 00:10:13.017 10307.025 - 10366.604: 31.2932% ( 416) 00:10:13.017 10366.604 - 10426.182: 33.9779% ( 311) 00:10:13.017 10426.182 - 10485.760: 37.1547% ( 368) 00:10:13.017 10485.760 - 10545.338: 40.5300% ( 391) 00:10:13.017 10545.338 - 10604.916: 43.2234% ( 312) 00:10:13.017 10604.916 - 10664.495: 45.5715% ( 272) 00:10:13.017 10664.495 - 10724.073: 48.1440% ( 298) 00:10:13.017 10724.073 - 10783.651: 50.9064% ( 320) 00:10:13.017 10783.651 - 10843.229: 53.6516% ( 318) 00:10:13.017 
10843.229 - 10902.807: 56.5349% ( 334) 00:10:13.017 10902.807 - 10962.385: 59.2369% ( 313) 00:10:13.017 10962.385 - 11021.964: 61.6713% ( 282) 00:10:13.017 11021.964 - 11081.542: 63.7776% ( 244) 00:10:13.017 11081.542 - 11141.120: 66.1602% ( 276) 00:10:13.017 11141.120 - 11200.698: 68.8191% ( 308) 00:10:13.017 11200.698 - 11260.276: 71.1585% ( 271) 00:10:13.017 11260.276 - 11319.855: 73.6102% ( 284) 00:10:13.017 11319.855 - 11379.433: 75.6906% ( 241) 00:10:13.017 11379.433 - 11439.011: 77.4689% ( 206) 00:10:13.017 11439.011 - 11498.589: 79.4199% ( 226) 00:10:13.017 11498.589 - 11558.167: 81.1982% ( 206) 00:10:13.017 11558.167 - 11617.745: 82.5190% ( 153) 00:10:13.017 11617.745 - 11677.324: 83.9865% ( 170) 00:10:13.017 11677.324 - 11736.902: 85.4972% ( 175) 00:10:13.017 11736.902 - 11796.480: 86.6108% ( 129) 00:10:13.017 11796.480 - 11856.058: 87.5691% ( 111) 00:10:13.017 11856.058 - 11915.636: 88.6050% ( 120) 00:10:13.017 11915.636 - 11975.215: 89.5805% ( 113) 00:10:13.017 11975.215 - 12034.793: 90.4265% ( 98) 00:10:13.017 12034.793 - 12094.371: 91.3588% ( 108) 00:10:13.017 12094.371 - 12153.949: 92.2825% ( 107) 00:10:13.017 12153.949 - 12213.527: 92.9558% ( 78) 00:10:13.017 12213.527 - 12273.105: 93.5256% ( 66) 00:10:13.017 12273.105 - 12332.684: 94.0953% ( 66) 00:10:13.017 12332.684 - 12392.262: 94.4838% ( 45) 00:10:13.017 12392.262 - 12451.840: 94.8722% ( 45) 00:10:13.017 12451.840 - 12511.418: 95.2866% ( 48) 00:10:13.017 12511.418 - 12570.996: 95.6146% ( 38) 00:10:13.017 12570.996 - 12630.575: 95.8823% ( 31) 00:10:13.017 12630.575 - 12690.153: 96.1585% ( 32) 00:10:13.017 12690.153 - 12749.731: 96.4434% ( 33) 00:10:13.017 12749.731 - 12809.309: 96.7196% ( 32) 00:10:13.017 12809.309 - 12868.887: 97.0735% ( 41) 00:10:13.017 12868.887 - 12928.465: 97.3843% ( 36) 00:10:13.017 12928.465 - 12988.044: 97.6778% ( 34) 00:10:13.017 12988.044 - 13047.622: 97.8850% ( 24) 00:10:13.017 13047.622 - 13107.200: 98.0404% ( 18) 00:10:13.017 13107.200 - 13166.778: 98.2562% ( 25) 00:10:13.017 13166.778 - 13226.356: 98.3598% ( 12) 00:10:13.017 13226.356 - 13285.935: 98.4289% ( 8) 00:10:13.017 13285.935 - 13345.513: 98.5152% ( 10) 00:10:13.018 13345.513 - 13405.091: 98.5843% ( 8) 00:10:13.018 13405.091 - 13464.669: 98.6533% ( 8) 00:10:13.018 13464.669 - 13524.247: 98.7137% ( 7) 00:10:13.018 13524.247 - 13583.825: 98.7569% ( 5) 00:10:13.018 13583.825 - 13643.404: 98.7914% ( 4) 00:10:13.018 13643.404 - 13702.982: 98.8346% ( 5) 00:10:13.018 13702.982 - 13762.560: 98.8605% ( 3) 00:10:13.018 13762.560 - 13822.138: 98.8864% ( 3) 00:10:13.018 13822.138 - 13881.716: 98.8950% ( 1) 00:10:13.018 24069.585 - 24188.742: 98.9037% ( 1) 00:10:13.018 24188.742 - 24307.898: 98.9555% ( 6) 00:10:13.018 24307.898 - 24427.055: 99.0245% ( 8) 00:10:13.018 24427.055 - 24546.211: 99.0849% ( 7) 00:10:13.018 24546.211 - 24665.367: 99.1540% ( 8) 00:10:13.018 24665.367 - 24784.524: 99.1885% ( 4) 00:10:13.018 24784.524 - 24903.680: 99.2231% ( 4) 00:10:13.018 24903.680 - 25022.836: 99.2662% ( 5) 00:10:13.018 25022.836 - 25141.993: 99.3094% ( 5) 00:10:13.018 25141.993 - 25261.149: 99.3439% ( 4) 00:10:13.018 25261.149 - 25380.305: 99.3785% ( 4) 00:10:13.018 25380.305 - 25499.462: 99.4216% ( 5) 00:10:13.018 25499.462 - 25618.618: 99.4475% ( 3) 00:10:13.018 31695.593 - 31933.905: 99.5166% ( 8) 00:10:13.018 31933.905 - 32172.218: 99.5425% ( 3) 00:10:13.018 32410.531 - 32648.844: 99.5511% ( 1) 00:10:13.018 32648.844 - 32887.156: 99.6288% ( 9) 00:10:13.018 32887.156 - 33125.469: 99.7151% ( 10) 00:10:13.018 33125.469 - 33363.782: 99.8015% ( 10) 
00:10:13.018 33363.782 - 33602.095: 99.8878% ( 10) 00:10:13.018 33602.095 - 33840.407: 99.9741% ( 10) 00:10:13.018 33840.407 - 34078.720: 100.0000% ( 3) 00:10:13.018 00:10:13.018 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:13.018 ============================================================================== 00:10:13.018 Range in us Cumulative IO count 00:10:13.018 7089.804 - 7119.593: 0.0259% ( 3) 00:10:13.018 7119.593 - 7149.382: 0.0691% ( 5) 00:10:13.018 7149.382 - 7179.171: 0.0950% ( 3) 00:10:13.018 7179.171 - 7208.960: 0.1209% ( 3) 00:10:13.018 7208.960 - 7238.749: 0.1899% ( 8) 00:10:13.018 7238.749 - 7268.538: 0.2503% ( 7) 00:10:13.018 7268.538 - 7298.327: 0.3367% ( 10) 00:10:13.018 7298.327 - 7328.116: 0.3712% ( 4) 00:10:13.018 7328.116 - 7357.905: 0.3798% ( 1) 00:10:13.018 7357.905 - 7387.695: 0.3971% ( 2) 00:10:13.018 7387.695 - 7417.484: 0.4057% ( 1) 00:10:13.018 7417.484 - 7447.273: 0.4144% ( 1) 00:10:13.018 7447.273 - 7477.062: 0.4316% ( 2) 00:10:13.018 7477.062 - 7506.851: 0.4403% ( 1) 00:10:13.018 7506.851 - 7536.640: 0.4575% ( 2) 00:10:13.018 7536.640 - 7566.429: 0.4662% ( 1) 00:10:13.018 7566.429 - 7596.218: 0.4834% ( 2) 00:10:13.018 7596.218 - 7626.007: 0.4921% ( 1) 00:10:13.018 7626.007 - 7685.585: 0.5180% ( 3) 00:10:13.018 7685.585 - 7745.164: 0.5439% ( 3) 00:10:13.018 7745.164 - 7804.742: 0.5525% ( 1) 00:10:13.018 9234.618 - 9294.196: 0.5611% ( 1) 00:10:13.018 9353.775 - 9413.353: 0.5698% ( 1) 00:10:13.018 9413.353 - 9472.931: 0.5956% ( 3) 00:10:13.018 9472.931 - 9532.509: 0.6561% ( 7) 00:10:13.018 9532.509 - 9592.087: 0.7769% ( 14) 00:10:13.018 9592.087 - 9651.665: 0.9410% ( 19) 00:10:13.018 9651.665 - 9711.244: 1.3726% ( 50) 00:10:13.018 9711.244 - 9770.822: 2.1840% ( 94) 00:10:13.018 9770.822 - 9830.400: 3.4185% ( 143) 00:10:13.018 9830.400 - 9889.978: 5.2486% ( 212) 00:10:13.018 9889.978 - 9949.556: 7.5276% ( 264) 00:10:13.018 9949.556 - 10009.135: 10.5836% ( 354) 00:10:13.018 10009.135 - 10068.713: 14.0711% ( 404) 00:10:13.018 10068.713 - 10128.291: 17.4551% ( 392) 00:10:13.018 10128.291 - 10187.869: 20.8650% ( 395) 00:10:13.018 10187.869 - 10247.447: 24.1108% ( 376) 00:10:13.018 10247.447 - 10307.025: 28.0473% ( 456) 00:10:13.018 10307.025 - 10366.604: 31.7852% ( 433) 00:10:13.018 10366.604 - 10426.182: 34.7462% ( 343) 00:10:13.018 10426.182 - 10485.760: 37.1547% ( 279) 00:10:13.018 10485.760 - 10545.338: 40.2106% ( 354) 00:10:13.018 10545.338 - 10604.916: 43.1975% ( 346) 00:10:13.018 10604.916 - 10664.495: 45.6146% ( 280) 00:10:13.018 10664.495 - 10724.073: 48.1785% ( 297) 00:10:13.018 10724.073 - 10783.651: 50.7338% ( 296) 00:10:13.018 10783.651 - 10843.229: 53.1077% ( 275) 00:10:13.018 10843.229 - 10902.807: 55.3090% ( 255) 00:10:13.018 10902.807 - 10962.385: 57.5104% ( 255) 00:10:13.018 10962.385 - 11021.964: 59.8325% ( 269) 00:10:13.018 11021.964 - 11081.542: 62.5000% ( 309) 00:10:13.018 11081.542 - 11141.120: 65.4523% ( 342) 00:10:13.018 11141.120 - 11200.698: 68.3443% ( 335) 00:10:13.018 11200.698 - 11260.276: 70.6923% ( 272) 00:10:13.018 11260.276 - 11319.855: 73.4979% ( 325) 00:10:13.018 11319.855 - 11379.433: 76.2863% ( 323) 00:10:13.018 11379.433 - 11439.011: 78.7465% ( 285) 00:10:13.018 11439.011 - 11498.589: 80.3695% ( 188) 00:10:13.018 11498.589 - 11558.167: 81.8888% ( 176) 00:10:13.018 11558.167 - 11617.745: 83.5031% ( 187) 00:10:13.018 11617.745 - 11677.324: 84.7980% ( 150) 00:10:13.018 11677.324 - 11736.902: 86.0497% ( 145) 00:10:13.018 11736.902 - 11796.480: 87.1720% ( 130) 00:10:13.018 11796.480 - 11856.058: 88.4841% ( 
152) 00:10:13.018 11856.058 - 11915.636: 89.2524% ( 89) 00:10:13.018 11915.636 - 11975.215: 90.0294% ( 90) 00:10:13.018 11975.215 - 12034.793: 90.8926% ( 100) 00:10:13.018 12034.793 - 12094.371: 91.7472% ( 99) 00:10:13.018 12094.371 - 12153.949: 92.3343% ( 68) 00:10:13.018 12153.949 - 12213.527: 92.9299% ( 69) 00:10:13.018 12213.527 - 12273.105: 93.4047% ( 55) 00:10:13.018 12273.105 - 12332.684: 93.9313% ( 61) 00:10:13.018 12332.684 - 12392.262: 94.4924% ( 65) 00:10:13.018 12392.262 - 12451.840: 95.0708% ( 67) 00:10:13.018 12451.840 - 12511.418: 95.5110% ( 51) 00:10:13.018 12511.418 - 12570.996: 95.8132% ( 35) 00:10:13.018 12570.996 - 12630.575: 96.0635% ( 29) 00:10:13.018 12630.575 - 12690.153: 96.3743% ( 36) 00:10:13.018 12690.153 - 12749.731: 96.6419% ( 31) 00:10:13.018 12749.731 - 12809.309: 96.8750% ( 27) 00:10:13.018 12809.309 - 12868.887: 97.1858% ( 36) 00:10:13.018 12868.887 - 12928.465: 97.4793% ( 34) 00:10:13.018 12928.465 - 12988.044: 97.7383% ( 30) 00:10:13.018 12988.044 - 13047.622: 97.9800% ( 28) 00:10:13.018 13047.622 - 13107.200: 98.1699% ( 22) 00:10:13.018 13107.200 - 13166.778: 98.2821% ( 13) 00:10:13.018 13166.778 - 13226.356: 98.4202% ( 16) 00:10:13.018 13226.356 - 13285.935: 98.5497% ( 15) 00:10:13.018 13285.935 - 13345.513: 98.6274% ( 9) 00:10:13.018 13345.513 - 13405.091: 98.6792% ( 6) 00:10:13.018 13405.091 - 13464.669: 98.7224% ( 5) 00:10:13.018 13464.669 - 13524.247: 98.7742% ( 6) 00:10:13.018 13524.247 - 13583.825: 98.8087% ( 4) 00:10:13.018 13583.825 - 13643.404: 98.8346% ( 3) 00:10:13.018 13643.404 - 13702.982: 98.8605% ( 3) 00:10:13.018 13702.982 - 13762.560: 98.8778% ( 2) 00:10:13.019 13762.560 - 13822.138: 98.8950% ( 2) 00:10:13.019 22520.553 - 22639.709: 98.9037% ( 1) 00:10:13.019 22639.709 - 22758.865: 98.9468% ( 5) 00:10:13.019 22758.865 - 22878.022: 98.9900% ( 5) 00:10:13.019 22878.022 - 22997.178: 99.0418% ( 6) 00:10:13.019 22997.178 - 23116.335: 99.0849% ( 5) 00:10:13.019 23116.335 - 23235.491: 99.1281% ( 5) 00:10:13.019 23235.491 - 23354.647: 99.1713% ( 5) 00:10:13.019 23354.647 - 23473.804: 99.2058% ( 4) 00:10:13.019 23473.804 - 23592.960: 99.2576% ( 6) 00:10:13.019 23592.960 - 23712.116: 99.3094% ( 6) 00:10:13.019 23712.116 - 23831.273: 99.3439% ( 4) 00:10:13.019 23831.273 - 23950.429: 99.3957% ( 6) 00:10:13.019 23950.429 - 24069.585: 99.4216% ( 3) 00:10:13.019 24069.585 - 24188.742: 99.4475% ( 3) 00:10:13.019 30980.655 - 31218.967: 99.4561% ( 1) 00:10:13.019 31218.967 - 31457.280: 99.5425% ( 10) 00:10:13.019 31457.280 - 31695.593: 99.6115% ( 8) 00:10:13.019 31695.593 - 31933.905: 99.6979% ( 10) 00:10:13.019 31933.905 - 32172.218: 99.7928% ( 11) 00:10:13.019 32172.218 - 32410.531: 99.8791% ( 10) 00:10:13.019 32410.531 - 32648.844: 99.9741% ( 11) 00:10:13.019 32648.844 - 32887.156: 100.0000% ( 3) 00:10:13.019 00:10:13.019 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:13.019 ============================================================================== 00:10:13.019 Range in us Cumulative IO count 00:10:13.019 6672.756 - 6702.545: 0.0086% ( 1) 00:10:13.019 6702.545 - 6732.335: 0.0259% ( 2) 00:10:13.019 6732.335 - 6762.124: 0.0604% ( 4) 00:10:13.019 6762.124 - 6791.913: 0.1036% ( 5) 00:10:13.019 6791.913 - 6821.702: 0.1640% ( 7) 00:10:13.019 6821.702 - 6851.491: 0.2417% ( 9) 00:10:13.019 6851.491 - 6881.280: 0.3280% ( 10) 00:10:13.019 6881.280 - 6911.069: 0.3539% ( 3) 00:10:13.019 6911.069 - 6940.858: 0.3712% ( 2) 00:10:13.019 6940.858 - 6970.647: 0.3798% ( 1) 00:10:13.019 6970.647 - 7000.436: 0.3971% ( 2) 00:10:13.019 7000.436 - 
7030.225: 0.4057% ( 1) 00:10:13.019 7030.225 - 7060.015: 0.4144% ( 1) 00:10:13.019 7060.015 - 7089.804: 0.4316% ( 2) 00:10:13.019 7089.804 - 7119.593: 0.4403% ( 1) 00:10:13.019 7119.593 - 7149.382: 0.4575% ( 2) 00:10:13.019 7149.382 - 7179.171: 0.4662% ( 1) 00:10:13.019 7179.171 - 7208.960: 0.4834% ( 2) 00:10:13.019 7208.960 - 7238.749: 0.4921% ( 1) 00:10:13.019 7238.749 - 7268.538: 0.5093% ( 2) 00:10:13.019 7268.538 - 7298.327: 0.5180% ( 1) 00:10:13.019 7298.327 - 7328.116: 0.5352% ( 2) 00:10:13.019 7328.116 - 7357.905: 0.5439% ( 1) 00:10:13.019 7357.905 - 7387.695: 0.5525% ( 1) 00:10:13.019 9234.618 - 9294.196: 0.5611% ( 1) 00:10:13.019 9353.775 - 9413.353: 0.5698% ( 1) 00:10:13.019 9413.353 - 9472.931: 0.6215% ( 6) 00:10:13.019 9472.931 - 9532.509: 0.6906% ( 8) 00:10:13.019 9532.509 - 9592.087: 0.8805% ( 22) 00:10:13.019 9592.087 - 9651.665: 1.1913% ( 36) 00:10:13.019 9651.665 - 9711.244: 1.7352% ( 63) 00:10:13.019 9711.244 - 9770.822: 2.6847% ( 110) 00:10:13.019 9770.822 - 9830.400: 3.9796% ( 150) 00:10:13.019 9830.400 - 9889.978: 5.7838% ( 209) 00:10:13.019 9889.978 - 9949.556: 8.2959% ( 291) 00:10:13.019 9949.556 - 10009.135: 11.2137% ( 338) 00:10:13.019 10009.135 - 10068.713: 14.0798% ( 332) 00:10:13.019 10068.713 - 10128.291: 17.5760% ( 405) 00:10:13.019 10128.291 - 10187.869: 21.2017% ( 420) 00:10:13.019 10187.869 - 10247.447: 24.4044% ( 371) 00:10:13.019 10247.447 - 10307.025: 27.5294% ( 362) 00:10:13.019 10307.025 - 10366.604: 31.2068% ( 426) 00:10:13.019 10366.604 - 10426.182: 34.2887% ( 357) 00:10:13.019 10426.182 - 10485.760: 36.9130% ( 304) 00:10:13.019 10485.760 - 10545.338: 39.9517% ( 352) 00:10:13.019 10545.338 - 10604.916: 43.0421% ( 358) 00:10:13.019 10604.916 - 10664.495: 45.5887% ( 295) 00:10:13.019 10664.495 - 10724.073: 48.5325% ( 341) 00:10:13.019 10724.073 - 10783.651: 51.0359% ( 290) 00:10:13.019 10783.651 - 10843.229: 53.4703% ( 282) 00:10:13.019 10843.229 - 10902.807: 56.1032% ( 305) 00:10:13.019 10902.807 - 10962.385: 58.6412% ( 294) 00:10:13.019 10962.385 - 11021.964: 61.1188% ( 287) 00:10:13.019 11021.964 - 11081.542: 63.3892% ( 263) 00:10:13.019 11081.542 - 11141.120: 65.7459% ( 273) 00:10:13.019 11141.120 - 11200.698: 68.1026% ( 273) 00:10:13.019 11200.698 - 11260.276: 70.4938% ( 277) 00:10:13.019 11260.276 - 11319.855: 72.9109% ( 280) 00:10:13.019 11319.855 - 11379.433: 75.0863% ( 252) 00:10:13.019 11379.433 - 11439.011: 76.9251% ( 213) 00:10:13.019 11439.011 - 11498.589: 79.0055% ( 241) 00:10:13.019 11498.589 - 11558.167: 80.7234% ( 199) 00:10:13.019 11558.167 - 11617.745: 82.5017% ( 206) 00:10:13.019 11617.745 - 11677.324: 83.9347% ( 166) 00:10:13.019 11677.324 - 11736.902: 85.5749% ( 190) 00:10:13.019 11736.902 - 11796.480: 86.9648% ( 161) 00:10:13.019 11796.480 - 11856.058: 88.0784% ( 129) 00:10:13.019 11856.058 - 11915.636: 89.0884% ( 117) 00:10:13.019 11915.636 - 11975.215: 90.0121% ( 107) 00:10:13.019 11975.215 - 12034.793: 90.9617% ( 110) 00:10:13.019 12034.793 - 12094.371: 92.0062% ( 121) 00:10:13.019 12094.371 - 12153.949: 92.8090% ( 93) 00:10:13.019 12153.949 - 12213.527: 93.6119% ( 93) 00:10:13.019 12213.527 - 12273.105: 94.1385% ( 61) 00:10:13.019 12273.105 - 12332.684: 94.6737% ( 62) 00:10:13.019 12332.684 - 12392.262: 95.0535% ( 44) 00:10:13.019 12392.262 - 12451.840: 95.3902% ( 39) 00:10:13.019 12451.840 - 12511.418: 95.6233% ( 27) 00:10:13.019 12511.418 - 12570.996: 95.8650% ( 28) 00:10:13.019 12570.996 - 12630.575: 96.0981% ( 27) 00:10:13.019 12630.575 - 12690.153: 96.3829% ( 33) 00:10:13.019 12690.153 - 12749.731: 96.6937% ( 36) 
00:10:13.019 12749.731 - 12809.309: 97.0994% ( 47) 00:10:13.019 12809.309 - 12868.887: 97.3584% ( 30) 00:10:13.019 12868.887 - 12928.465: 97.5656% ( 24) 00:10:13.019 12928.465 - 12988.044: 97.8505% ( 33) 00:10:13.019 12988.044 - 13047.622: 98.1008% ( 29) 00:10:13.019 13047.622 - 13107.200: 98.2821% ( 21) 00:10:13.019 13107.200 - 13166.778: 98.4116% ( 15) 00:10:13.019 13166.778 - 13226.356: 98.4979% ( 10) 00:10:13.019 13226.356 - 13285.935: 98.5670% ( 8) 00:10:13.019 13285.935 - 13345.513: 98.6360% ( 8) 00:10:13.019 13345.513 - 13405.091: 98.6878% ( 6) 00:10:13.019 13405.091 - 13464.669: 98.7310% ( 5) 00:10:13.019 13464.669 - 13524.247: 98.7828% ( 6) 00:10:13.019 13524.247 - 13583.825: 98.8173% ( 4) 00:10:13.019 13583.825 - 13643.404: 98.8519% ( 4) 00:10:13.019 13643.404 - 13702.982: 98.8778% ( 3) 00:10:13.019 13702.982 - 13762.560: 98.8950% ( 2) 00:10:13.019 21686.458 - 21805.615: 98.9382% ( 5) 00:10:13.019 21805.615 - 21924.771: 99.0073% ( 8) 00:10:13.019 21924.771 - 22043.927: 99.0763% ( 8) 00:10:13.019 22043.927 - 22163.084: 99.1281% ( 6) 00:10:13.019 22163.084 - 22282.240: 99.1626% ( 4) 00:10:13.019 22282.240 - 22401.396: 99.1972% ( 4) 00:10:13.019 22401.396 - 22520.553: 99.2490% ( 6) 00:10:13.019 22520.553 - 22639.709: 99.2835% ( 4) 00:10:13.019 22639.709 - 22758.865: 99.3180% ( 4) 00:10:13.019 22758.865 - 22878.022: 99.3612% ( 5) 00:10:13.019 22878.022 - 22997.178: 99.4044% ( 5) 00:10:13.019 22997.178 - 23116.335: 99.4475% ( 5) 00:10:13.019 29193.309 - 29312.465: 99.4648% ( 2) 00:10:13.019 29312.465 - 29431.622: 99.5079% ( 5) 00:10:13.019 30146.560 - 30265.716: 99.5166% ( 1) 00:10:13.019 30265.716 - 30384.873: 99.5511% ( 4) 00:10:13.019 30384.873 - 30504.029: 99.5856% ( 4) 00:10:13.020 30504.029 - 30742.342: 99.6633% ( 9) 00:10:13.020 30742.342 - 30980.655: 99.7583% ( 11) 00:10:13.020 30980.655 - 31218.967: 99.8446% ( 10) 00:10:13.020 31218.967 - 31457.280: 99.9223% ( 9) 00:10:13.020 31457.280 - 31695.593: 100.0000% ( 9) 00:10:13.020 00:10:13.020 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:13.020 ============================================================================== 00:10:13.020 Range in us Cumulative IO count 00:10:13.020 6225.920 - 6255.709: 0.0432% ( 5) 00:10:13.020 6255.709 - 6285.498: 0.0950% ( 6) 00:10:13.020 6285.498 - 6315.287: 0.1295% ( 4) 00:10:13.020 6315.287 - 6345.076: 0.1899% ( 7) 00:10:13.020 6345.076 - 6374.865: 0.2331% ( 5) 00:10:13.020 6374.865 - 6404.655: 0.2503% ( 2) 00:10:13.020 6404.655 - 6434.444: 0.2590% ( 1) 00:10:13.020 6434.444 - 6464.233: 0.2762% ( 2) 00:10:13.020 6464.233 - 6494.022: 0.2849% ( 1) 00:10:13.020 6494.022 - 6523.811: 0.2935% ( 1) 00:10:13.020 6523.811 - 6553.600: 0.3108% ( 2) 00:10:13.020 6553.600 - 6583.389: 0.3194% ( 1) 00:10:13.020 6583.389 - 6613.178: 0.3367% ( 2) 00:10:13.020 6613.178 - 6642.967: 0.3453% ( 1) 00:10:13.020 6642.967 - 6672.756: 0.3626% ( 2) 00:10:13.020 6672.756 - 6702.545: 0.3712% ( 1) 00:10:13.020 6702.545 - 6732.335: 0.3798% ( 1) 00:10:13.020 6732.335 - 6762.124: 0.3885% ( 1) 00:10:13.020 6762.124 - 6791.913: 0.3971% ( 1) 00:10:13.020 6791.913 - 6821.702: 0.4144% ( 2) 00:10:13.020 6821.702 - 6851.491: 0.4316% ( 2) 00:10:13.020 6851.491 - 6881.280: 0.4489% ( 2) 00:10:13.020 6881.280 - 6911.069: 0.4662% ( 2) 00:10:13.020 6911.069 - 6940.858: 0.4748% ( 1) 00:10:13.020 6940.858 - 6970.647: 0.4921% ( 2) 00:10:13.020 6970.647 - 7000.436: 0.5093% ( 2) 00:10:13.020 7000.436 - 7030.225: 0.5266% ( 2) 00:10:13.020 7030.225 - 7060.015: 0.5439% ( 2) 00:10:13.020 7060.015 - 7089.804: 0.5525% ( 1) 
00:10:13.020 8936.727 - 8996.305: 0.5611% ( 1) 00:10:13.020 9055.884 - 9115.462: 0.5698% ( 1) 00:10:13.020 9294.196 - 9353.775: 0.5784% ( 1) 00:10:13.020 9353.775 - 9413.353: 0.6733% ( 11) 00:10:13.020 9413.353 - 9472.931: 0.8805% ( 24) 00:10:13.020 9472.931 - 9532.509: 1.0532% ( 20) 00:10:13.020 9532.509 - 9592.087: 1.1654% ( 13) 00:10:13.020 9592.087 - 9651.665: 1.4675% ( 35) 00:10:13.020 9651.665 - 9711.244: 1.8819% ( 48) 00:10:13.020 9711.244 - 9770.822: 2.6502% ( 89) 00:10:13.020 9770.822 - 9830.400: 3.8588% ( 140) 00:10:13.020 9830.400 - 9889.978: 5.7320% ( 217) 00:10:13.020 9889.978 - 9949.556: 8.0024% ( 263) 00:10:13.020 9949.556 - 10009.135: 11.0929% ( 358) 00:10:13.020 10009.135 - 10068.713: 14.4682% ( 391) 00:10:13.020 10068.713 - 10128.291: 17.6019% ( 363) 00:10:13.020 10128.291 - 10187.869: 21.0290% ( 397) 00:10:13.020 10187.869 - 10247.447: 24.9309% ( 452) 00:10:13.020 10247.447 - 10307.025: 28.0646% ( 363) 00:10:13.020 10307.025 - 10366.604: 31.7852% ( 431) 00:10:13.020 10366.604 - 10426.182: 34.9448% ( 366) 00:10:13.020 10426.182 - 10485.760: 38.0266% ( 357) 00:10:13.020 10485.760 - 10545.338: 40.7631% ( 317) 00:10:13.020 10545.338 - 10604.916: 43.7586% ( 347) 00:10:13.020 10604.916 - 10664.495: 47.0822% ( 385) 00:10:13.020 10664.495 - 10724.073: 49.3180% ( 259) 00:10:13.020 10724.073 - 10783.651: 51.3381% ( 234) 00:10:13.020 10783.651 - 10843.229: 53.6516% ( 268) 00:10:13.020 10843.229 - 10902.807: 55.9392% ( 265) 00:10:13.020 10902.807 - 10962.385: 58.5808% ( 306) 00:10:13.020 10962.385 - 11021.964: 60.7907% ( 256) 00:10:13.020 11021.964 - 11081.542: 62.8367% ( 237) 00:10:13.020 11081.542 - 11141.120: 65.2279% ( 277) 00:10:13.020 11141.120 - 11200.698: 67.5932% ( 274) 00:10:13.020 11200.698 - 11260.276: 69.8722% ( 264) 00:10:13.020 11260.276 - 11319.855: 72.2203% ( 272) 00:10:13.020 11319.855 - 11379.433: 75.2849% ( 355) 00:10:13.020 11379.433 - 11439.011: 77.6934% ( 279) 00:10:13.020 11439.011 - 11498.589: 79.2213% ( 177) 00:10:13.020 11498.589 - 11558.167: 81.2414% ( 234) 00:10:13.020 11558.167 - 11617.745: 82.5535% ( 152) 00:10:13.020 11617.745 - 11677.324: 83.9520% ( 162) 00:10:13.020 11677.324 - 11736.902: 85.2383% ( 149) 00:10:13.020 11736.902 - 11796.480: 86.4900% ( 145) 00:10:13.020 11796.480 - 11856.058: 87.6727% ( 137) 00:10:13.020 11856.058 - 11915.636: 88.6309% ( 111) 00:10:13.020 11915.636 - 11975.215: 89.5028% ( 101) 00:10:13.020 11975.215 - 12034.793: 90.5646% ( 123) 00:10:13.020 12034.793 - 12094.371: 91.6005% ( 120) 00:10:13.020 12094.371 - 12153.949: 92.8349% ( 143) 00:10:13.020 12153.949 - 12213.527: 93.5342% ( 81) 00:10:13.020 12213.527 - 12273.105: 94.0090% ( 55) 00:10:13.020 12273.105 - 12332.684: 94.4838% ( 55) 00:10:13.020 12332.684 - 12392.262: 95.0017% ( 60) 00:10:13.020 12392.262 - 12451.840: 95.3729% ( 43) 00:10:13.020 12451.840 - 12511.418: 95.6405% ( 31) 00:10:13.020 12511.418 - 12570.996: 95.9513% ( 36) 00:10:13.020 12570.996 - 12630.575: 96.2362% ( 33) 00:10:13.020 12630.575 - 12690.153: 96.4779% ( 28) 00:10:13.020 12690.153 - 12749.731: 96.7282% ( 29) 00:10:13.020 12749.731 - 12809.309: 96.9700% ( 28) 00:10:13.020 12809.309 - 12868.887: 97.2462% ( 32) 00:10:13.020 12868.887 - 12928.465: 97.5052% ( 30) 00:10:13.020 12928.465 - 12988.044: 97.6260% ( 14) 00:10:13.020 12988.044 - 13047.622: 97.7642% ( 16) 00:10:13.020 13047.622 - 13107.200: 97.8677% ( 12) 00:10:13.020 13107.200 - 13166.778: 97.9713% ( 12) 00:10:13.020 13166.778 - 13226.356: 98.0490% ( 9) 00:10:13.020 13226.356 - 13285.935: 98.1440% ( 11) 00:10:13.020 13285.935 - 13345.513: 
98.2735% ( 15) 00:10:13.020 13345.513 - 13405.091: 98.4202% ( 17) 00:10:13.020 13405.091 - 13464.669: 98.5325% ( 13) 00:10:13.020 13464.669 - 13524.247: 98.6878% ( 18) 00:10:13.020 13524.247 - 13583.825: 98.7569% ( 8) 00:10:13.020 13583.825 - 13643.404: 98.8173% ( 7) 00:10:13.020 13643.404 - 13702.982: 98.8519% ( 4) 00:10:13.020 13702.982 - 13762.560: 98.8778% ( 3) 00:10:13.020 13762.560 - 13822.138: 98.8950% ( 2) 00:10:13.020 20614.051 - 20733.207: 98.9296% ( 4) 00:10:13.020 20733.207 - 20852.364: 99.0073% ( 9) 00:10:13.020 20852.364 - 20971.520: 99.0849% ( 9) 00:10:13.020 20971.520 - 21090.676: 99.1626% ( 9) 00:10:13.020 21090.676 - 21209.833: 99.1972% ( 4) 00:10:13.020 21209.833 - 21328.989: 99.2403% ( 5) 00:10:13.020 21328.989 - 21448.145: 99.2749% ( 4) 00:10:13.020 21448.145 - 21567.302: 99.3180% ( 5) 00:10:13.020 21567.302 - 21686.458: 99.3439% ( 3) 00:10:13.020 21686.458 - 21805.615: 99.3871% ( 5) 00:10:13.020 21805.615 - 21924.771: 99.4302% ( 5) 00:10:13.020 21924.771 - 22043.927: 99.4475% ( 2) 00:10:13.020 28120.902 - 28240.058: 99.4907% ( 5) 00:10:13.020 28240.058 - 28359.215: 99.5252% ( 4) 00:10:13.020 28359.215 - 28478.371: 99.5338% ( 1) 00:10:13.020 28954.996 - 29074.153: 99.5425% ( 1) 00:10:13.020 29074.153 - 29193.309: 99.5770% ( 4) 00:10:13.020 29193.309 - 29312.465: 99.6202% ( 5) 00:10:13.020 29312.465 - 29431.622: 99.6547% ( 4) 00:10:13.020 29431.622 - 29550.778: 99.6979% ( 5) 00:10:13.020 29550.778 - 29669.935: 99.7324% ( 4) 00:10:13.020 29669.935 - 29789.091: 99.7669% ( 4) 00:10:13.020 29789.091 - 29908.247: 99.8101% ( 5) 00:10:13.020 29908.247 - 30027.404: 99.8532% ( 5) 00:10:13.020 30027.404 - 30146.560: 99.8964% ( 5) 00:10:13.020 30146.560 - 30265.716: 99.9396% ( 5) 00:10:13.021 30265.716 - 30384.873: 99.9914% ( 6) 00:10:13.021 30384.873 - 30504.029: 100.0000% ( 1) 00:10:13.021 00:10:13.021 13:09:09 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:10:13.021 00:10:13.021 real 0m2.576s 00:10:13.021 user 0m2.227s 00:10:13.021 sys 0m0.235s 00:10:13.021 13:09:09 nvme.nvme_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:13.021 ************************************ 00:10:13.021 END TEST nvme_perf 00:10:13.021 13:09:09 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:10:13.021 ************************************ 00:10:13.279 13:09:09 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:13.279 13:09:09 nvme -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:10:13.279 13:09:09 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:13.279 13:09:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:13.279 ************************************ 00:10:13.279 START TEST nvme_hello_world 00:10:13.279 ************************************ 00:10:13.279 13:09:09 nvme.nvme_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:13.279 Initializing NVMe Controllers 00:10:13.279 Attached to 0000:00:10.0 00:10:13.279 Namespace ID: 1 size: 6GB 00:10:13.279 Attached to 0000:00:11.0 00:10:13.279 Namespace ID: 1 size: 5GB 00:10:13.279 Attached to 0000:00:13.0 00:10:13.279 Namespace ID: 1 size: 1GB 00:10:13.279 Attached to 0000:00:12.0 00:10:13.279 Namespace ID: 1 size: 4GB 00:10:13.279 Namespace ID: 2 size: 4GB 00:10:13.279 Namespace ID: 3 size: 4GB 00:10:13.279 Initialization complete. 00:10:13.279 INFO: using host memory buffer for IO 00:10:13.279 Hello world! 
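Each sub-test above is launched through the run_test helper sourced from autotest_common.sh, which is what emits the START TEST / END TEST banners and the real/user/sys timings interleaved in this log. A rough, simplified sketch of that wrapper pattern, for orientation only (the actual helper in autotest_common.sh is more involved and differs in detail):

run_test() {
    # print the banner style seen throughout this log
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"        # run the test command passed in, timing it
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

# invocation recorded above:
# run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0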
00:10:13.279 INFO: using host memory buffer for IO 00:10:13.279 Hello world! 00:10:13.279 INFO: using host memory buffer for IO 00:10:13.279 Hello world! 00:10:13.279 INFO: using host memory buffer for IO 00:10:13.279 Hello world! 00:10:13.279 INFO: using host memory buffer for IO 00:10:13.279 Hello world! 00:10:13.279 INFO: using host memory buffer for IO 00:10:13.279 Hello world! 00:10:13.537 00:10:13.537 real 0m0.257s 00:10:13.537 user 0m0.099s 00:10:13.537 sys 0m0.114s 00:10:13.537 13:09:10 nvme.nvme_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:13.537 13:09:10 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:13.537 ************************************ 00:10:13.537 END TEST nvme_hello_world 00:10:13.537 ************************************ 00:10:13.537 13:09:10 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:13.537 13:09:10 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:13.537 13:09:10 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:13.537 13:09:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:13.537 ************************************ 00:10:13.537 START TEST nvme_sgl 00:10:13.537 ************************************ 00:10:13.537 13:09:10 nvme.nvme_sgl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:13.795 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:10:13.795 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:10:13.795 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:10:13.795 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:10:13.795 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:10:13.795 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:10:13.795 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:10:13.795 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:10:13.795 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:10:13.795 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:10:13.795 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:10:13.795 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:10:13.795 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:10:13.795 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:10:13.795 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:10:13.795 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:10:13.795 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:10:13.795 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:10:13.795 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:10:13.795 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:10:13.795 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:10:13.795 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:10:13.795 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:10:13.795 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:10:13.795 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:10:13.795 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:10:13.795 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:10:13.795 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:10:13.795 0000:00:12.0: build_io_request_4 Invalid IO length 
parameter 00:10:13.795 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:10:13.795 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:10:13.795 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:10:13.795 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:10:13.795 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:10:13.795 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:10:13.795 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:10:13.795 NVMe Readv/Writev Request test 00:10:13.795 Attached to 0000:00:10.0 00:10:13.795 Attached to 0000:00:11.0 00:10:13.795 Attached to 0000:00:13.0 00:10:13.795 Attached to 0000:00:12.0 00:10:13.795 0000:00:10.0: build_io_request_2 test passed 00:10:13.795 0000:00:10.0: build_io_request_4 test passed 00:10:13.795 0000:00:10.0: build_io_request_5 test passed 00:10:13.795 0000:00:10.0: build_io_request_6 test passed 00:10:13.795 0000:00:10.0: build_io_request_7 test passed 00:10:13.795 0000:00:10.0: build_io_request_10 test passed 00:10:13.795 0000:00:11.0: build_io_request_2 test passed 00:10:13.795 0000:00:11.0: build_io_request_4 test passed 00:10:13.795 0000:00:11.0: build_io_request_5 test passed 00:10:13.795 0000:00:11.0: build_io_request_6 test passed 00:10:13.795 0000:00:11.0: build_io_request_7 test passed 00:10:13.795 0000:00:11.0: build_io_request_10 test passed 00:10:13.795 Cleaning up... 00:10:13.795 00:10:13.795 real 0m0.347s 00:10:13.795 user 0m0.162s 00:10:13.795 sys 0m0.143s 00:10:13.795 13:09:10 nvme.nvme_sgl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:13.795 13:09:10 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:10:13.795 ************************************ 00:10:13.795 END TEST nvme_sgl 00:10:13.795 ************************************ 00:10:13.795 13:09:10 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:13.795 13:09:10 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:13.795 13:09:10 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:13.795 13:09:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:13.795 ************************************ 00:10:13.795 START TEST nvme_e2edp 00:10:13.795 ************************************ 00:10:13.795 13:09:10 nvme.nvme_e2edp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:14.053 NVMe Write/Read with End-to-End data protection test 00:10:14.053 Attached to 0000:00:10.0 00:10:14.053 Attached to 0000:00:11.0 00:10:14.053 Attached to 0000:00:13.0 00:10:14.053 Attached to 0000:00:12.0 00:10:14.053 Cleaning up... 
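The per-test binaries named so far can also be invoked directly against the same four PCIe controllers, outside the autotest wrapper. A minimal sketch, assuming the repository layout shown in this log and that hugepages and device binding were already prepared (root privileges assumed):

REPO=/home/vagrant/spdk_repo/spdk
sudo "$REPO/build/examples/hello_world" -i 0    # prints one "Hello world!" per attached namespace
sudo "$REPO/test/nvme/sgl/sgl"                  # SGL build_io_request_* length-validation checks
sudo "$REPO/test/nvme/e2edp/nvme_dp"            # end-to-end data protection write/read test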
00:10:14.053 00:10:14.053 real 0m0.271s 00:10:14.053 user 0m0.103s 00:10:14.053 sys 0m0.117s 00:10:14.053 13:09:10 nvme.nvme_e2edp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:14.053 13:09:10 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:10:14.053 ************************************ 00:10:14.053 END TEST nvme_e2edp 00:10:14.053 ************************************ 00:10:14.311 13:09:10 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:14.311 13:09:10 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:14.311 13:09:10 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:14.311 13:09:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:14.311 ************************************ 00:10:14.311 START TEST nvme_reserve 00:10:14.311 ************************************ 00:10:14.311 13:09:10 nvme.nvme_reserve -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:14.311 ===================================================== 00:10:14.311 NVMe Controller at PCI bus 0, device 16, function 0 00:10:14.311 ===================================================== 00:10:14.311 Reservations: Not Supported 00:10:14.311 ===================================================== 00:10:14.311 NVMe Controller at PCI bus 0, device 17, function 0 00:10:14.311 ===================================================== 00:10:14.311 Reservations: Not Supported 00:10:14.311 ===================================================== 00:10:14.311 NVMe Controller at PCI bus 0, device 19, function 0 00:10:14.311 ===================================================== 00:10:14.311 Reservations: Not Supported 00:10:14.311 ===================================================== 00:10:14.311 NVMe Controller at PCI bus 0, device 18, function 0 00:10:14.311 ===================================================== 00:10:14.311 Reservations: Not Supported 00:10:14.311 Reservation test passed 00:10:14.569 00:10:14.569 real 0m0.250s 00:10:14.569 user 0m0.089s 00:10:14.569 sys 0m0.116s 00:10:14.569 13:09:11 nvme.nvme_reserve -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:14.569 ************************************ 00:10:14.569 13:09:11 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:10:14.569 END TEST nvme_reserve 00:10:14.569 ************************************ 00:10:14.569 13:09:11 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:14.569 13:09:11 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:14.569 13:09:11 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:14.570 13:09:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:14.570 ************************************ 00:10:14.570 START TEST nvme_err_injection 00:10:14.570 ************************************ 00:10:14.570 13:09:11 nvme.nvme_err_injection -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:14.827 NVMe Error Injection test 00:10:14.827 Attached to 0000:00:10.0 00:10:14.827 Attached to 0000:00:11.0 00:10:14.827 Attached to 0000:00:13.0 00:10:14.827 Attached to 0000:00:12.0 00:10:14.827 0000:00:10.0: get features failed as expected 00:10:14.827 0000:00:11.0: get features failed as expected 00:10:14.827 0000:00:13.0: get features failed as expected 00:10:14.827 0000:00:12.0: get features failed as expected 00:10:14.827 
0000:00:10.0: get features successfully as expected 00:10:14.827 0000:00:11.0: get features successfully as expected 00:10:14.827 0000:00:13.0: get features successfully as expected 00:10:14.827 0000:00:12.0: get features successfully as expected 00:10:14.827 0000:00:10.0: read failed as expected 00:10:14.827 0000:00:11.0: read failed as expected 00:10:14.827 0000:00:13.0: read failed as expected 00:10:14.827 0000:00:12.0: read failed as expected 00:10:14.827 0000:00:10.0: read successfully as expected 00:10:14.827 0000:00:11.0: read successfully as expected 00:10:14.827 0000:00:13.0: read successfully as expected 00:10:14.827 0000:00:12.0: read successfully as expected 00:10:14.827 Cleaning up... 00:10:14.827 00:10:14.827 real 0m0.292s 00:10:14.827 user 0m0.111s 00:10:14.827 sys 0m0.132s 00:10:14.827 13:09:11 nvme.nvme_err_injection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:14.827 ************************************ 00:10:14.827 13:09:11 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:10:14.827 END TEST nvme_err_injection 00:10:14.827 ************************************ 00:10:14.827 13:09:11 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:14.827 13:09:11 nvme -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:10:14.827 13:09:11 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:14.827 13:09:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:14.827 ************************************ 00:10:14.827 START TEST nvme_overhead 00:10:14.827 ************************************ 00:10:14.827 13:09:11 nvme.nvme_overhead -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:16.201 Initializing NVMe Controllers 00:10:16.201 Attached to 0000:00:10.0 00:10:16.201 Attached to 0000:00:11.0 00:10:16.201 Attached to 0000:00:13.0 00:10:16.201 Attached to 0000:00:12.0 00:10:16.201 Initialization complete. Launching workers. 
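The overhead run below was started as overhead -o 4096 -t 1 -H -i 0; those flags appear to select a 4 KiB IO size, a 1-second run, histogram collection, and shared-memory id 0 (meanings inferred from this invocation, not restated from the tool's help text). A one-liner sketch to repeat the run and keep just the submit/complete summary lines that follow:

sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 \
        | grep '(in ns) avg, min, max'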
00:10:16.201 submit (in ns) avg, min, max = 15683.6, 13169.5, 109592.7 00:10:16.201 complete (in ns) avg, min, max = 10315.6, 9082.7, 222347.3 00:10:16.201 00:10:16.201 Submit histogram 00:10:16.201 ================ 00:10:16.201 Range in us Cumulative Count 00:10:16.201 13.149 - 13.207: 0.0097% ( 1) 00:10:16.201 13.324 - 13.382: 0.0193% ( 1) 00:10:16.201 13.382 - 13.440: 0.0290% ( 1) 00:10:16.201 13.440 - 13.498: 0.0677% ( 4) 00:10:16.201 13.498 - 13.556: 0.1064% ( 4) 00:10:16.201 13.556 - 13.615: 0.1741% ( 7) 00:10:16.201 13.615 - 13.673: 0.3676% ( 20) 00:10:16.201 13.673 - 13.731: 0.6094% ( 25) 00:10:16.201 13.731 - 13.789: 0.8706% ( 27) 00:10:16.201 13.789 - 13.847: 1.2188% ( 36) 00:10:16.201 13.847 - 13.905: 1.5767% ( 37) 00:10:16.201 13.905 - 13.964: 1.8669% ( 30) 00:10:16.201 13.964 - 14.022: 2.1087% ( 25) 00:10:16.201 14.022 - 14.080: 2.3409% ( 24) 00:10:16.201 14.080 - 14.138: 2.4763% ( 14) 00:10:16.201 14.138 - 14.196: 2.6698% ( 20) 00:10:16.201 14.196 - 14.255: 2.7858% ( 12) 00:10:16.201 14.255 - 14.313: 2.9213% ( 14) 00:10:16.201 14.313 - 14.371: 3.0083% ( 9) 00:10:16.201 14.371 - 14.429: 3.2308% ( 23) 00:10:16.201 14.429 - 14.487: 3.9756% ( 77) 00:10:16.201 14.487 - 14.545: 5.6491% ( 173) 00:10:16.201 14.545 - 14.604: 9.3635% ( 384) 00:10:16.201 14.604 - 14.662: 14.9642% ( 579) 00:10:16.201 14.662 - 14.720: 22.8091% ( 811) 00:10:16.201 14.720 - 14.778: 31.6019% ( 909) 00:10:16.201 14.778 - 14.836: 40.8009% ( 951) 00:10:16.201 14.836 - 14.895: 48.4910% ( 795) 00:10:16.201 14.895 - 15.011: 61.6077% ( 1356) 00:10:16.201 15.011 - 15.127: 70.4005% ( 909) 00:10:16.201 15.127 - 15.244: 76.4171% ( 622) 00:10:16.201 15.244 - 15.360: 80.9634% ( 470) 00:10:16.201 15.360 - 15.476: 83.9911% ( 313) 00:10:16.201 15.476 - 15.593: 85.8290% ( 190) 00:10:16.201 15.593 - 15.709: 87.4831% ( 171) 00:10:16.201 15.709 - 15.825: 88.4214% ( 97) 00:10:16.201 15.825 - 15.942: 89.1759% ( 78) 00:10:16.201 15.942 - 16.058: 89.8046% ( 65) 00:10:16.201 16.058 - 16.175: 90.3173% ( 53) 00:10:16.201 16.175 - 16.291: 90.5978% ( 29) 00:10:16.201 16.291 - 16.407: 90.8106% ( 22) 00:10:16.201 16.407 - 16.524: 91.0621% ( 26) 00:10:16.201 16.524 - 16.640: 91.2072% ( 15) 00:10:16.201 16.640 - 16.756: 91.3039% ( 10) 00:10:16.201 16.756 - 16.873: 91.3426% ( 4) 00:10:16.201 16.873 - 16.989: 91.4393% ( 10) 00:10:16.201 16.989 - 17.105: 91.4974% ( 6) 00:10:16.201 17.105 - 17.222: 91.5361% ( 4) 00:10:16.201 17.222 - 17.338: 91.5554% ( 2) 00:10:16.201 17.338 - 17.455: 91.5844% ( 3) 00:10:16.201 17.455 - 17.571: 91.6328% ( 5) 00:10:16.201 17.571 - 17.687: 91.6522% ( 2) 00:10:16.201 17.687 - 17.804: 91.6715% ( 2) 00:10:16.201 17.804 - 17.920: 91.7102% ( 4) 00:10:16.201 17.920 - 18.036: 91.7392% ( 3) 00:10:16.201 18.153 - 18.269: 91.7682% ( 3) 00:10:16.201 18.269 - 18.385: 91.7876% ( 2) 00:10:16.201 18.385 - 18.502: 91.7973% ( 1) 00:10:16.201 18.502 - 18.618: 91.8456% ( 5) 00:10:16.201 18.618 - 18.735: 91.8650% ( 2) 00:10:16.201 18.735 - 18.851: 91.8843% ( 2) 00:10:16.201 18.851 - 18.967: 91.9230% ( 4) 00:10:16.201 18.967 - 19.084: 91.9327% ( 1) 00:10:16.201 19.084 - 19.200: 91.9617% ( 3) 00:10:16.201 19.200 - 19.316: 92.0004% ( 4) 00:10:16.201 19.316 - 19.433: 92.0391% ( 4) 00:10:16.201 19.433 - 19.549: 92.1455% ( 11) 00:10:16.201 19.549 - 19.665: 92.2132% ( 7) 00:10:16.201 19.665 - 19.782: 92.2712% ( 6) 00:10:16.201 19.782 - 19.898: 92.3680% ( 10) 00:10:16.201 19.898 - 20.015: 92.5227% ( 16) 00:10:16.201 20.015 - 20.131: 92.6388% ( 12) 00:10:16.201 20.131 - 20.247: 92.7936% ( 16) 00:10:16.201 20.247 - 20.364: 93.0741% ( 29) 
00:10:16.201 20.364 - 20.480: 93.3159% ( 25) 00:10:16.201 20.480 - 20.596: 93.5287% ( 22) 00:10:16.201 20.596 - 20.713: 93.6738% ( 15) 00:10:16.201 20.713 - 20.829: 93.8963% ( 23) 00:10:16.201 20.829 - 20.945: 94.1091% ( 22) 00:10:16.201 20.945 - 21.062: 94.3122% ( 21) 00:10:16.201 21.062 - 21.178: 94.4477% ( 14) 00:10:16.201 21.178 - 21.295: 94.6605% ( 22) 00:10:16.201 21.295 - 21.411: 94.8056% ( 15) 00:10:16.201 21.411 - 21.527: 94.9120% ( 11) 00:10:16.201 21.527 - 21.644: 95.0571% ( 15) 00:10:16.201 21.644 - 21.760: 95.2022% ( 15) 00:10:16.201 21.760 - 21.876: 95.2989% ( 10) 00:10:16.201 21.876 - 21.993: 95.3956% ( 10) 00:10:16.201 21.993 - 22.109: 95.4924% ( 10) 00:10:16.201 22.109 - 22.225: 95.5988% ( 11) 00:10:16.201 22.225 - 22.342: 95.7148% ( 12) 00:10:16.201 22.342 - 22.458: 95.7825% ( 7) 00:10:16.201 22.458 - 22.575: 95.8793% ( 10) 00:10:16.201 22.575 - 22.691: 95.9760% ( 10) 00:10:16.201 22.691 - 22.807: 96.1405% ( 17) 00:10:16.201 22.807 - 22.924: 96.2662% ( 13) 00:10:16.201 22.924 - 23.040: 96.4016% ( 14) 00:10:16.201 23.040 - 23.156: 96.5467% ( 15) 00:10:16.201 23.156 - 23.273: 96.6338% ( 9) 00:10:16.201 23.273 - 23.389: 96.7112% ( 8) 00:10:16.201 23.389 - 23.505: 96.8756% ( 17) 00:10:16.201 23.505 - 23.622: 96.9723% ( 10) 00:10:16.201 23.622 - 23.738: 97.0981% ( 13) 00:10:16.201 23.738 - 23.855: 97.1948% ( 10) 00:10:16.201 23.855 - 23.971: 97.2529% ( 6) 00:10:16.201 23.971 - 24.087: 97.3109% ( 6) 00:10:16.201 24.087 - 24.204: 97.3979% ( 9) 00:10:16.201 24.204 - 24.320: 97.4657% ( 7) 00:10:16.201 24.320 - 24.436: 97.5914% ( 13) 00:10:16.201 24.436 - 24.553: 97.6881% ( 10) 00:10:16.201 24.553 - 24.669: 97.7655% ( 8) 00:10:16.201 24.669 - 24.785: 97.8332% ( 7) 00:10:16.201 24.785 - 24.902: 97.9106% ( 8) 00:10:16.201 24.902 - 25.018: 98.0074% ( 10) 00:10:16.201 25.018 - 25.135: 98.1234% ( 12) 00:10:16.201 25.135 - 25.251: 98.2008% ( 8) 00:10:16.201 25.251 - 25.367: 98.2685% ( 7) 00:10:16.201 25.367 - 25.484: 98.3749% ( 11) 00:10:16.201 25.484 - 25.600: 98.4233% ( 5) 00:10:16.202 25.600 - 25.716: 98.4813% ( 6) 00:10:16.202 25.716 - 25.833: 98.5104% ( 3) 00:10:16.202 25.833 - 25.949: 98.5490% ( 4) 00:10:16.202 25.949 - 26.065: 98.6168% ( 7) 00:10:16.202 26.065 - 26.182: 98.7038% ( 9) 00:10:16.202 26.182 - 26.298: 98.7425% ( 4) 00:10:16.202 26.298 - 26.415: 98.7909% ( 5) 00:10:16.202 26.415 - 26.531: 98.8779% ( 9) 00:10:16.202 26.531 - 26.647: 98.9263% ( 5) 00:10:16.202 26.647 - 26.764: 98.9843% ( 6) 00:10:16.202 26.880 - 26.996: 99.0230% ( 4) 00:10:16.202 26.996 - 27.113: 99.0714% ( 5) 00:10:16.202 27.229 - 27.345: 99.0907% ( 2) 00:10:16.202 27.345 - 27.462: 99.1198% ( 3) 00:10:16.202 27.462 - 27.578: 99.1391% ( 2) 00:10:16.202 27.578 - 27.695: 99.1778% ( 4) 00:10:16.202 27.695 - 27.811: 99.1875% ( 1) 00:10:16.202 27.811 - 27.927: 99.1971% ( 1) 00:10:16.202 27.927 - 28.044: 99.2262% ( 3) 00:10:16.202 28.160 - 28.276: 99.2455% ( 2) 00:10:16.202 28.276 - 28.393: 99.2552% ( 1) 00:10:16.202 28.509 - 28.625: 99.2745% ( 2) 00:10:16.202 28.625 - 28.742: 99.2939% ( 2) 00:10:16.202 28.742 - 28.858: 99.3132% ( 2) 00:10:16.202 28.858 - 28.975: 99.3229% ( 1) 00:10:16.202 29.091 - 29.207: 99.3326% ( 1) 00:10:16.202 29.440 - 29.556: 99.3422% ( 1) 00:10:16.202 29.556 - 29.673: 99.3519% ( 1) 00:10:16.202 29.673 - 29.789: 99.3616% ( 1) 00:10:16.202 29.789 - 30.022: 99.4196% ( 6) 00:10:16.202 30.022 - 30.255: 99.4293% ( 1) 00:10:16.202 30.255 - 30.487: 99.4777% ( 5) 00:10:16.202 30.487 - 30.720: 99.4970% ( 2) 00:10:16.202 30.720 - 30.953: 99.5357% ( 4) 00:10:16.202 30.953 - 31.185: 99.5744% ( 4) 
00:10:16.202 31.418 - 31.651: 99.6131% ( 4) 00:10:16.202 31.651 - 31.884: 99.6518% ( 4) 00:10:16.202 31.884 - 32.116: 99.6614% ( 1) 00:10:16.202 32.116 - 32.349: 99.6808% ( 2) 00:10:16.202 32.349 - 32.582: 99.7195% ( 4) 00:10:16.202 32.815 - 33.047: 99.7388% ( 2) 00:10:16.202 33.047 - 33.280: 99.7485% ( 1) 00:10:16.202 33.513 - 33.745: 99.7582% ( 1) 00:10:16.202 34.444 - 34.676: 99.7678% ( 1) 00:10:16.202 34.676 - 34.909: 99.7775% ( 1) 00:10:16.202 34.909 - 35.142: 99.7872% ( 1) 00:10:16.202 35.607 - 35.840: 99.7969% ( 1) 00:10:16.202 36.073 - 36.305: 99.8065% ( 1) 00:10:16.202 37.236 - 37.469: 99.8162% ( 1) 00:10:16.202 37.702 - 37.935: 99.8259% ( 1) 00:10:16.202 37.935 - 38.167: 99.8356% ( 1) 00:10:16.202 38.865 - 39.098: 99.8452% ( 1) 00:10:16.202 39.098 - 39.331: 99.8549% ( 1) 00:10:16.202 40.262 - 40.495: 99.8646% ( 1) 00:10:16.202 41.425 - 41.658: 99.8743% ( 1) 00:10:16.202 43.985 - 44.218: 99.8839% ( 1) 00:10:16.202 46.313 - 46.545: 99.8936% ( 1) 00:10:16.202 46.778 - 47.011: 99.9033% ( 1) 00:10:16.202 48.175 - 48.407: 99.9129% ( 1) 00:10:16.202 50.502 - 50.735: 99.9226% ( 1) 00:10:16.202 50.735 - 50.967: 99.9323% ( 1) 00:10:16.202 52.596 - 52.829: 99.9420% ( 1) 00:10:16.202 61.905 - 62.371: 99.9516% ( 1) 00:10:16.202 72.145 - 72.611: 99.9613% ( 1) 00:10:16.202 79.593 - 80.058: 99.9710% ( 1) 00:10:16.202 94.487 - 94.953: 99.9807% ( 1) 00:10:16.202 106.589 - 107.055: 99.9903% ( 1) 00:10:16.202 109.382 - 109.847: 100.0000% ( 1) 00:10:16.202 00:10:16.202 Complete histogram 00:10:16.202 ================== 00:10:16.202 Range in us Cumulative Count 00:10:16.202 9.076 - 9.135: 0.0097% ( 1) 00:10:16.202 9.251 - 9.309: 0.0193% ( 1) 00:10:16.202 9.309 - 9.367: 0.0290% ( 1) 00:10:16.202 9.367 - 9.425: 0.0774% ( 5) 00:10:16.202 9.425 - 9.484: 0.6868% ( 63) 00:10:16.202 9.484 - 9.542: 3.6661% ( 308) 00:10:16.202 9.542 - 9.600: 11.4722% ( 807) 00:10:16.202 9.600 - 9.658: 25.1693% ( 1416) 00:10:16.202 9.658 - 9.716: 39.8336% ( 1516) 00:10:16.202 9.716 - 9.775: 53.8595% ( 1450) 00:10:16.202 9.775 - 9.833: 64.4322% ( 1093) 00:10:16.202 9.833 - 9.891: 71.7644% ( 758) 00:10:16.202 9.891 - 9.949: 75.8754% ( 425) 00:10:16.202 9.949 - 10.007: 78.5645% ( 278) 00:10:16.202 10.007 - 10.065: 80.2573% ( 175) 00:10:16.202 10.065 - 10.124: 81.3891% ( 117) 00:10:16.202 10.124 - 10.182: 82.1726% ( 81) 00:10:16.202 10.182 - 10.240: 82.6272% ( 47) 00:10:16.202 10.240 - 10.298: 83.2946% ( 69) 00:10:16.202 10.298 - 10.356: 83.7686% ( 49) 00:10:16.202 10.356 - 10.415: 84.5425% ( 80) 00:10:16.202 10.415 - 10.473: 85.4711% ( 96) 00:10:16.202 10.473 - 10.531: 86.5738% ( 114) 00:10:16.202 10.531 - 10.589: 87.6862% ( 115) 00:10:16.202 10.589 - 10.647: 88.6729% ( 102) 00:10:16.202 10.647 - 10.705: 89.7079% ( 107) 00:10:16.202 10.705 - 10.764: 90.3947% ( 71) 00:10:16.202 10.764 - 10.822: 91.1395% ( 77) 00:10:16.202 10.822 - 10.880: 91.5941% ( 47) 00:10:16.202 10.880 - 10.938: 91.9133% ( 33) 00:10:16.202 10.938 - 10.996: 92.1165% ( 21) 00:10:16.202 10.996 - 11.055: 92.2712% ( 16) 00:10:16.202 11.055 - 11.113: 92.3873% ( 12) 00:10:16.202 11.113 - 11.171: 92.4357% ( 5) 00:10:16.202 11.171 - 11.229: 92.4840% ( 5) 00:10:16.202 11.229 - 11.287: 92.5131% ( 3) 00:10:16.202 11.287 - 11.345: 92.5324% ( 2) 00:10:16.202 11.345 - 11.404: 92.6098% ( 8) 00:10:16.202 11.404 - 11.462: 92.6968% ( 9) 00:10:16.202 11.462 - 11.520: 92.7452% ( 5) 00:10:16.202 11.520 - 11.578: 92.7839% ( 4) 00:10:16.202 11.578 - 11.636: 92.8613% ( 8) 00:10:16.202 11.636 - 11.695: 92.9387% ( 8) 00:10:16.202 11.695 - 11.753: 93.0741% ( 14) 00:10:16.202 11.753 - 
11.811: 93.1225% ( 5) 00:10:16.202 11.811 - 11.869: 93.2966% ( 18) 00:10:16.202 11.869 - 11.927: 93.4417% ( 15) 00:10:16.202 11.927 - 11.985: 93.6061% ( 17) 00:10:16.202 11.985 - 12.044: 93.7609% ( 16) 00:10:16.202 12.044 - 12.102: 94.0027% ( 25) 00:10:16.202 12.102 - 12.160: 94.2445% ( 25) 00:10:16.202 12.160 - 12.218: 94.3800% ( 14) 00:10:16.202 12.218 - 12.276: 94.5251% ( 15) 00:10:16.202 12.276 - 12.335: 94.6411% ( 12) 00:10:16.202 12.335 - 12.393: 94.6992% ( 6) 00:10:16.202 12.393 - 12.451: 94.7282% ( 3) 00:10:16.202 12.451 - 12.509: 94.7862% ( 6) 00:10:16.202 12.509 - 12.567: 94.9120% ( 13) 00:10:16.202 12.567 - 12.625: 94.9507% ( 4) 00:10:16.202 12.625 - 12.684: 95.0184% ( 7) 00:10:16.202 12.684 - 12.742: 95.0281% ( 1) 00:10:16.202 12.800 - 12.858: 95.0571% ( 3) 00:10:16.202 12.858 - 12.916: 95.0958% ( 4) 00:10:16.202 12.975 - 13.033: 95.1054% ( 1) 00:10:16.202 13.033 - 13.091: 95.1151% ( 1) 00:10:16.202 13.149 - 13.207: 95.1345% ( 2) 00:10:16.202 13.207 - 13.265: 95.1538% ( 2) 00:10:16.202 13.265 - 13.324: 95.1731% ( 2) 00:10:16.202 13.324 - 13.382: 95.1925% ( 2) 00:10:16.202 13.382 - 13.440: 95.2022% ( 1) 00:10:16.202 13.440 - 13.498: 95.2215% ( 2) 00:10:16.202 13.498 - 13.556: 95.2409% ( 2) 00:10:16.202 13.556 - 13.615: 95.2796% ( 4) 00:10:16.202 13.615 - 13.673: 95.2989% ( 2) 00:10:16.202 13.673 - 13.731: 95.3086% ( 1) 00:10:16.202 13.731 - 13.789: 95.3182% ( 1) 00:10:16.202 13.789 - 13.847: 95.3376% ( 2) 00:10:16.202 13.847 - 13.905: 95.3666% ( 3) 00:10:16.202 13.905 - 13.964: 95.3956% ( 3) 00:10:16.202 13.964 - 14.022: 95.4246% ( 3) 00:10:16.202 14.022 - 14.080: 95.4343% ( 1) 00:10:16.202 14.080 - 14.138: 95.4537% ( 2) 00:10:16.202 14.138 - 14.196: 95.4730% ( 2) 00:10:16.202 14.255 - 14.313: 95.4827% ( 1) 00:10:16.202 14.313 - 14.371: 95.4924% ( 1) 00:10:16.202 14.371 - 14.429: 95.5117% ( 2) 00:10:16.202 14.429 - 14.487: 95.5311% ( 2) 00:10:16.202 14.487 - 14.545: 95.5407% ( 1) 00:10:16.202 14.604 - 14.662: 95.5504% ( 1) 00:10:16.202 14.662 - 14.720: 95.5697% ( 2) 00:10:16.202 14.720 - 14.778: 95.5794% ( 1) 00:10:16.202 14.778 - 14.836: 95.5988% ( 2) 00:10:16.202 14.836 - 14.895: 95.6084% ( 1) 00:10:16.202 14.895 - 15.011: 95.6278% ( 2) 00:10:16.202 15.011 - 15.127: 95.6568% ( 3) 00:10:16.202 15.127 - 15.244: 95.6955% ( 4) 00:10:16.202 15.244 - 15.360: 95.7535% ( 6) 00:10:16.202 15.360 - 15.476: 95.8599% ( 11) 00:10:16.202 15.476 - 15.593: 95.9083% ( 5) 00:10:16.202 15.593 - 15.709: 95.9954% ( 9) 00:10:16.202 15.709 - 15.825: 96.0631% ( 7) 00:10:16.202 15.825 - 15.942: 96.1888% ( 13) 00:10:16.202 15.942 - 16.058: 96.2952% ( 11) 00:10:16.202 16.058 - 16.175: 96.4790% ( 19) 00:10:16.202 16.175 - 16.291: 96.5757% ( 10) 00:10:16.202 16.291 - 16.407: 96.6821% ( 11) 00:10:16.202 16.407 - 16.524: 96.7885% ( 11) 00:10:16.202 16.524 - 16.640: 96.8950% ( 11) 00:10:16.202 16.640 - 16.756: 96.9917% ( 10) 00:10:16.202 16.756 - 16.873: 97.1851% ( 20) 00:10:16.202 16.873 - 16.989: 97.3109% ( 13) 00:10:16.202 16.989 - 17.105: 97.4657% ( 16) 00:10:16.202 17.105 - 17.222: 97.5721% ( 11) 00:10:16.202 17.222 - 17.338: 97.6688% ( 10) 00:10:16.202 17.338 - 17.455: 97.8139% ( 15) 00:10:16.202 17.455 - 17.571: 97.9977% ( 19) 00:10:16.202 17.571 - 17.687: 98.1041% ( 11) 00:10:16.202 17.687 - 17.804: 98.2298% ( 13) 00:10:16.202 17.804 - 17.920: 98.3072% ( 8) 00:10:16.202 17.920 - 18.036: 98.3943% ( 9) 00:10:16.202 18.036 - 18.153: 98.4330% ( 4) 00:10:16.202 18.153 - 18.269: 98.4813% ( 5) 00:10:16.202 18.269 - 18.385: 98.5394% ( 6) 00:10:16.202 18.385 - 18.502: 98.6071% ( 7) 00:10:16.202 18.502 - 
18.618: 98.6554% ( 5) 00:10:16.202 18.618 - 18.735: 98.7038% ( 5) 00:10:16.202 18.735 - 18.851: 98.7425% ( 4) 00:10:16.202 18.851 - 18.967: 98.7812% ( 4) 00:10:16.203 18.967 - 19.084: 98.8296% ( 5) 00:10:16.203 19.084 - 19.200: 98.8683% ( 4) 00:10:16.203 19.200 - 19.316: 98.9456% ( 8) 00:10:16.203 19.316 - 19.433: 98.9940% ( 5) 00:10:16.203 19.433 - 19.549: 99.0133% ( 2) 00:10:16.203 19.549 - 19.665: 99.0424% ( 3) 00:10:16.203 19.665 - 19.782: 99.0907% ( 5) 00:10:16.203 19.782 - 19.898: 99.1488% ( 6) 00:10:16.203 19.898 - 20.015: 99.1584% ( 1) 00:10:16.203 20.015 - 20.131: 99.1971% ( 4) 00:10:16.203 20.131 - 20.247: 99.2262% ( 3) 00:10:16.203 20.364 - 20.480: 99.2455% ( 2) 00:10:16.203 20.480 - 20.596: 99.2745% ( 3) 00:10:16.203 20.596 - 20.713: 99.2842% ( 1) 00:10:16.203 20.713 - 20.829: 99.3035% ( 2) 00:10:16.203 20.829 - 20.945: 99.3229% ( 2) 00:10:16.203 21.527 - 21.644: 99.3326% ( 1) 00:10:16.203 21.644 - 21.760: 99.3616% ( 3) 00:10:16.203 21.760 - 21.876: 99.4099% ( 5) 00:10:16.203 21.876 - 21.993: 99.4293% ( 2) 00:10:16.203 22.109 - 22.225: 99.4390% ( 1) 00:10:16.203 22.458 - 22.575: 99.4486% ( 1) 00:10:16.203 23.040 - 23.156: 99.4583% ( 1) 00:10:16.203 23.389 - 23.505: 99.4680% ( 1) 00:10:16.203 23.622 - 23.738: 99.4970% ( 3) 00:10:16.203 24.087 - 24.204: 99.5067% ( 1) 00:10:16.203 24.204 - 24.320: 99.5163% ( 1) 00:10:16.203 24.436 - 24.553: 99.5260% ( 1) 00:10:16.203 24.785 - 24.902: 99.5744% ( 5) 00:10:16.203 24.902 - 25.018: 99.6228% ( 5) 00:10:16.203 25.135 - 25.251: 99.6324% ( 1) 00:10:16.203 25.251 - 25.367: 99.6421% ( 1) 00:10:16.203 25.484 - 25.600: 99.6614% ( 2) 00:10:16.203 25.600 - 25.716: 99.6905% ( 3) 00:10:16.203 25.716 - 25.833: 99.7292% ( 4) 00:10:16.203 25.949 - 26.065: 99.7485% ( 2) 00:10:16.203 26.065 - 26.182: 99.7678% ( 2) 00:10:16.203 26.182 - 26.298: 99.7775% ( 1) 00:10:16.203 26.298 - 26.415: 99.7872% ( 1) 00:10:16.203 26.415 - 26.531: 99.7969% ( 1) 00:10:16.203 26.764 - 26.880: 99.8259% ( 3) 00:10:16.203 26.996 - 27.113: 99.8452% ( 2) 00:10:16.203 27.345 - 27.462: 99.8549% ( 1) 00:10:16.203 27.578 - 27.695: 99.8646% ( 1) 00:10:16.203 28.393 - 28.509: 99.8839% ( 2) 00:10:16.203 29.556 - 29.673: 99.8936% ( 1) 00:10:16.203 30.022 - 30.255: 99.9129% ( 2) 00:10:16.203 30.255 - 30.487: 99.9226% ( 1) 00:10:16.203 30.720 - 30.953: 99.9323% ( 1) 00:10:16.203 30.953 - 31.185: 99.9420% ( 1) 00:10:16.203 31.185 - 31.418: 99.9516% ( 1) 00:10:16.203 33.047 - 33.280: 99.9613% ( 1) 00:10:16.203 33.280 - 33.513: 99.9710% ( 1) 00:10:16.203 40.029 - 40.262: 99.9807% ( 1) 00:10:16.203 41.658 - 41.891: 99.9903% ( 1) 00:10:16.203 221.556 - 222.487: 100.0000% ( 1) 00:10:16.203 00:10:16.203 00:10:16.203 real 0m1.258s 00:10:16.203 user 0m1.077s 00:10:16.203 sys 0m0.121s 00:10:16.203 13:09:12 nvme.nvme_overhead -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:16.203 13:09:12 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:10:16.203 ************************************ 00:10:16.203 END TEST nvme_overhead 00:10:16.203 ************************************ 00:10:16.203 13:09:12 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:16.203 13:09:12 nvme -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:10:16.203 13:09:12 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:16.203 13:09:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:16.203 ************************************ 00:10:16.203 START TEST nvme_arbitration 00:10:16.203 
************************************ 00:10:16.203 13:09:12 nvme.nvme_arbitration -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:19.484 Initializing NVMe Controllers 00:10:19.484 Attached to 0000:00:10.0 00:10:19.484 Attached to 0000:00:11.0 00:10:19.484 Attached to 0000:00:13.0 00:10:19.484 Attached to 0000:00:12.0 00:10:19.484 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:10:19.484 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:10:19.484 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:10:19.484 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:10:19.484 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:10:19.484 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:10:19.484 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:10:19.484 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:10:19.484 Initialization complete. Launching workers. 00:10:19.484 Starting thread on core 1 with urgent priority queue 00:10:19.484 Starting thread on core 2 with urgent priority queue 00:10:19.484 Starting thread on core 3 with urgent priority queue 00:10:19.484 Starting thread on core 0 with urgent priority queue 00:10:19.484 QEMU NVMe Ctrl (12340 ) core 0: 3733.33 IO/s 26.79 secs/100000 ios 00:10:19.484 QEMU NVMe Ctrl (12342 ) core 0: 3733.33 IO/s 26.79 secs/100000 ios 00:10:19.484 QEMU NVMe Ctrl (12341 ) core 1: 3904.00 IO/s 25.61 secs/100000 ios 00:10:19.484 QEMU NVMe Ctrl (12342 ) core 1: 3904.00 IO/s 25.61 secs/100000 ios 00:10:19.484 QEMU NVMe Ctrl (12343 ) core 2: 3968.00 IO/s 25.20 secs/100000 ios 00:10:19.484 QEMU NVMe Ctrl (12342 ) core 3: 3968.00 IO/s 25.20 secs/100000 ios 00:10:19.484 ======================================================== 00:10:19.484 00:10:19.484 00:10:19.484 real 0m3.295s 00:10:19.484 user 0m9.053s 00:10:19.484 sys 0m0.146s 00:10:19.484 13:09:16 nvme.nvme_arbitration -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:19.484 13:09:16 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:10:19.484 ************************************ 00:10:19.484 END TEST nvme_arbitration 00:10:19.484 ************************************ 00:10:19.484 13:09:16 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:19.484 13:09:16 nvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:10:19.484 13:09:16 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:19.484 13:09:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:19.484 ************************************ 00:10:19.484 START TEST nvme_single_aen 00:10:19.484 ************************************ 00:10:19.484 13:09:16 nvme.nvme_single_aen -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:19.741 Asynchronous Event Request test 00:10:19.741 Attached to 0000:00:10.0 00:10:19.741 Attached to 0000:00:11.0 00:10:19.741 Attached to 0000:00:13.0 00:10:19.741 Attached to 0000:00:12.0 00:10:19.741 Reset controller to setup AER completions for this process 00:10:19.741 Registering asynchronous event callbacks... 
00:10:19.741 Getting orig temperature thresholds of all controllers 00:10:19.741 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:19.741 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:19.741 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:19.741 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:19.741 Setting all controllers temperature threshold low to trigger AER 00:10:19.741 Waiting for all controllers temperature threshold to be set lower 00:10:19.741 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:19.741 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:19.741 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:19.741 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:19.741 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:19.741 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:19.741 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:19.741 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:19.741 Waiting for all controllers to trigger AER and reset threshold 00:10:19.741 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.741 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.741 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.741 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.741 Cleaning up... 00:10:19.741 00:10:19.741 real 0m0.299s 00:10:19.741 user 0m0.104s 00:10:19.741 sys 0m0.144s 00:10:19.741 13:09:16 nvme.nvme_single_aen -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:19.741 13:09:16 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:10:19.741 ************************************ 00:10:19.741 END TEST nvme_single_aen 00:10:19.741 ************************************ 00:10:19.741 13:09:16 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:10:19.741 13:09:16 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:19.741 13:09:16 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:19.741 13:09:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:19.741 ************************************ 00:10:19.741 START TEST nvme_doorbell_aers 00:10:19.741 ************************************ 00:10:19.741 13:09:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1121 -- # nvme_doorbell_aers 00:10:19.741 13:09:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:10:19.741 13:09:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:10:19.741 13:09:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:10:19.741 13:09:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:10:19.741 13:09:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:19.741 13:09:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # local bdfs 00:10:19.741 13:09:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:19.741 13:09:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:19.741 13:09:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 
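The doorbell test builds its device list with the get_nvme_bdfs helper traced above; a standalone sketch of that enumeration (repo path taken from this run) is below. Each address is then handed to the doorbell_aers helper under a 10-second timeout, as the per-bdf loop that follows shows.

    # gen_nvme.sh emits a bdev JSON config for the local NVMe controllers;
    # jq extracts each controller's PCI address (traddr).
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    printf '%s\n' "${bdfs[@]}"    # in this run: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0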
00:10:19.998 13:09:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:10:19.998 13:09:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:19.998 13:09:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:19.998 13:09:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:20.256 [2024-07-15 13:09:16.740050] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81003) is not found. Dropping the request. 00:10:30.211 Executing: test_write_invalid_db 00:10:30.211 Waiting for AER completion... 00:10:30.211 Failure: test_write_invalid_db 00:10:30.211 00:10:30.211 Executing: test_invalid_db_write_overflow_sq 00:10:30.211 Waiting for AER completion... 00:10:30.211 Failure: test_invalid_db_write_overflow_sq 00:10:30.211 00:10:30.211 Executing: test_invalid_db_write_overflow_cq 00:10:30.211 Waiting for AER completion... 00:10:30.211 Failure: test_invalid_db_write_overflow_cq 00:10:30.211 00:10:30.211 13:09:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:30.211 13:09:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:30.211 [2024-07-15 13:09:26.803236] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81003) is not found. Dropping the request. 00:10:40.218 Executing: test_write_invalid_db 00:10:40.218 Waiting for AER completion... 00:10:40.218 Failure: test_write_invalid_db 00:10:40.218 00:10:40.218 Executing: test_invalid_db_write_overflow_sq 00:10:40.218 Waiting for AER completion... 00:10:40.218 Failure: test_invalid_db_write_overflow_sq 00:10:40.218 00:10:40.218 Executing: test_invalid_db_write_overflow_cq 00:10:40.218 Waiting for AER completion... 00:10:40.218 Failure: test_invalid_db_write_overflow_cq 00:10:40.218 00:10:40.218 13:09:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:40.218 13:09:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:40.218 [2024-07-15 13:09:36.815979] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81003) is not found. Dropping the request. 00:10:50.217 Executing: test_write_invalid_db 00:10:50.217 Waiting for AER completion... 00:10:50.217 Failure: test_write_invalid_db 00:10:50.217 00:10:50.217 Executing: test_invalid_db_write_overflow_sq 00:10:50.217 Waiting for AER completion... 00:10:50.217 Failure: test_invalid_db_write_overflow_sq 00:10:50.217 00:10:50.217 Executing: test_invalid_db_write_overflow_cq 00:10:50.217 Waiting for AER completion... 
00:10:50.217 Failure: test_invalid_db_write_overflow_cq 00:10:50.217 00:10:50.217 13:09:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:50.217 13:09:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:50.217 [2024-07-15 13:09:46.883673] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81003) is not found. Dropping the request. 00:11:00.193 Executing: test_write_invalid_db 00:11:00.193 Waiting for AER completion... 00:11:00.193 Failure: test_write_invalid_db 00:11:00.193 00:11:00.193 Executing: test_invalid_db_write_overflow_sq 00:11:00.193 Waiting for AER completion... 00:11:00.193 Failure: test_invalid_db_write_overflow_sq 00:11:00.193 00:11:00.193 Executing: test_invalid_db_write_overflow_cq 00:11:00.193 Waiting for AER completion... 00:11:00.193 Failure: test_invalid_db_write_overflow_cq 00:11:00.193 00:11:00.193 00:11:00.193 real 0m40.238s 00:11:00.193 user 0m34.233s 00:11:00.193 sys 0m5.641s 00:11:00.193 ************************************ 00:11:00.193 END TEST nvme_doorbell_aers 00:11:00.193 ************************************ 00:11:00.193 13:09:56 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:00.193 13:09:56 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:11:00.193 13:09:56 nvme -- nvme/nvme.sh@97 -- # uname 00:11:00.193 13:09:56 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:11:00.193 13:09:56 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:11:00.193 13:09:56 nvme -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:11:00.193 13:09:56 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:00.193 13:09:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:00.193 ************************************ 00:11:00.193 START TEST nvme_multi_aen 00:11:00.193 ************************************ 00:11:00.193 13:09:56 nvme.nvme_multi_aen -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:11:00.449 [2024-07-15 13:09:56.949786] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81003) is not found. Dropping the request. 00:11:00.449 [2024-07-15 13:09:56.950698] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81003) is not found. Dropping the request. 00:11:00.449 [2024-07-15 13:09:56.950862] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81003) is not found. Dropping the request. 00:11:00.449 [2024-07-15 13:09:56.953029] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81003) is not found. Dropping the request. 00:11:00.449 [2024-07-15 13:09:56.953232] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81003) is not found. Dropping the request. 00:11:00.449 [2024-07-15 13:09:56.953344] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81003) is not found. Dropping the request. 00:11:00.449 [2024-07-15 13:09:56.955126] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81003) is not found. 
Dropping the request. 00:11:00.449 [2024-07-15 13:09:56.955318] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81003) is not found. Dropping the request. 00:11:00.449 [2024-07-15 13:09:56.955429] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81003) is not found. Dropping the request. 00:11:00.449 [2024-07-15 13:09:56.957098] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81003) is not found. Dropping the request. 00:11:00.449 [2024-07-15 13:09:56.957301] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81003) is not found. Dropping the request. 00:11:00.450 [2024-07-15 13:09:56.957411] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81003) is not found. Dropping the request. 00:11:00.450 Child process pid: 81519 00:11:00.707 [Child] Asynchronous Event Request test 00:11:00.707 [Child] Attached to 0000:00:10.0 00:11:00.707 [Child] Attached to 0000:00:11.0 00:11:00.707 [Child] Attached to 0000:00:13.0 00:11:00.707 [Child] Attached to 0000:00:12.0 00:11:00.707 [Child] Registering asynchronous event callbacks... 00:11:00.707 [Child] Getting orig temperature thresholds of all controllers 00:11:00.707 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:00.707 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:00.707 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:00.707 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:00.707 [Child] Waiting for all controllers to trigger AER and reset threshold 00:11:00.707 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:00.707 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:00.707 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:00.707 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:00.707 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:00.707 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:00.707 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:00.707 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:00.707 [Child] Cleaning up... 00:11:00.707 Asynchronous Event Request test 00:11:00.707 Attached to 0000:00:10.0 00:11:00.707 Attached to 0000:00:11.0 00:11:00.707 Attached to 0000:00:13.0 00:11:00.707 Attached to 0000:00:12.0 00:11:00.707 Reset controller to setup AER completions for this process 00:11:00.707 Registering asynchronous event callbacks... 
00:11:00.707 Getting orig temperature thresholds of all controllers 00:11:00.707 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:00.707 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:00.707 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:00.707 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:00.708 Setting all controllers temperature threshold low to trigger AER 00:11:00.708 Waiting for all controllers temperature threshold to be set lower 00:11:00.708 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:00.708 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:11:00.708 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:00.708 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:11:00.708 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:00.708 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:11:00.708 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:00.708 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:11:00.708 Waiting for all controllers to trigger AER and reset threshold 00:11:00.708 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:00.708 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:00.708 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:00.708 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:00.708 Cleaning up... 00:11:00.708 00:11:00.708 real 0m0.528s 00:11:00.708 user 0m0.180s 00:11:00.708 sys 0m0.228s 00:11:00.708 13:09:57 nvme.nvme_multi_aen -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:00.708 13:09:57 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:11:00.708 ************************************ 00:11:00.708 END TEST nvme_multi_aen 00:11:00.708 ************************************ 00:11:00.708 13:09:57 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:00.708 13:09:57 nvme -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:11:00.708 13:09:57 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:00.708 13:09:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:00.708 ************************************ 00:11:00.708 START TEST nvme_startup 00:11:00.708 ************************************ 00:11:00.708 13:09:57 nvme.nvme_startup -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:00.965 Initializing NVMe Controllers 00:11:00.965 Attached to 0000:00:10.0 00:11:00.965 Attached to 0000:00:11.0 00:11:00.965 Attached to 0000:00:13.0 00:11:00.965 Attached to 0000:00:12.0 00:11:00.965 Initialization complete. 00:11:00.965 Time used:164468.984 (us). 
00:11:00.965 00:11:00.965 real 0m0.247s 00:11:00.966 user 0m0.066s 00:11:00.966 sys 0m0.125s 00:11:00.966 13:09:57 nvme.nvme_startup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:00.966 ************************************ 00:11:00.966 13:09:57 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:11:00.966 END TEST nvme_startup 00:11:00.966 ************************************ 00:11:00.966 13:09:57 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:11:00.966 13:09:57 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:00.966 13:09:57 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:00.966 13:09:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:00.966 ************************************ 00:11:00.966 START TEST nvme_multi_secondary 00:11:00.966 ************************************ 00:11:00.966 13:09:57 nvme.nvme_multi_secondary -- common/autotest_common.sh@1121 -- # nvme_multi_secondary 00:11:00.966 13:09:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=81569 00:11:00.966 13:09:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:11:00.966 13:09:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=81570 00:11:00.966 13:09:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:11:00.966 13:09:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:04.248 Initializing NVMe Controllers 00:11:04.248 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:04.248 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:04.248 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:04.248 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:04.248 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:04.248 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:04.248 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:04.248 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:04.248 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:04.248 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:04.248 Initialization complete. Launching workers. 
00:11:04.248 ======================================================== 00:11:04.248 Latency(us) 00:11:04.248 Device Information : IOPS MiB/s Average min max 00:11:04.248 PCIE (0000:00:10.0) NSID 1 from core 1: 5083.01 19.86 3145.61 1118.54 7514.14 00:11:04.248 PCIE (0000:00:11.0) NSID 1 from core 1: 5083.01 19.86 3147.12 1170.18 8054.26 00:11:04.248 PCIE (0000:00:13.0) NSID 1 from core 1: 5083.01 19.86 3147.06 1171.19 7239.38 00:11:04.248 PCIE (0000:00:12.0) NSID 1 from core 1: 5083.01 19.86 3146.94 1187.07 6973.92 00:11:04.248 PCIE (0000:00:12.0) NSID 2 from core 1: 5083.01 19.86 3146.92 1200.65 6947.02 00:11:04.248 PCIE (0000:00:12.0) NSID 3 from core 1: 5083.01 19.86 3146.85 955.21 7437.87 00:11:04.248 ======================================================== 00:11:04.248 Total : 30498.06 119.13 3146.75 955.21 8054.26 00:11:04.248 00:11:04.506 Initializing NVMe Controllers 00:11:04.506 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:04.506 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:04.506 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:04.506 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:04.506 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:04.506 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:04.506 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:04.506 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:04.506 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:04.506 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:04.506 Initialization complete. Launching workers. 00:11:04.506 ======================================================== 00:11:04.506 Latency(us) 00:11:04.506 Device Information : IOPS MiB/s Average min max 00:11:04.506 PCIE (0000:00:10.0) NSID 1 from core 2: 2073.17 8.10 7715.20 1987.56 16719.03 00:11:04.506 PCIE (0000:00:11.0) NSID 1 from core 2: 2073.17 8.10 7716.34 1995.56 17139.03 00:11:04.506 PCIE (0000:00:13.0) NSID 1 from core 2: 2073.17 8.10 7715.34 1783.30 17142.01 00:11:04.506 PCIE (0000:00:12.0) NSID 1 from core 2: 2073.17 8.10 7715.60 1733.07 14154.78 00:11:04.506 PCIE (0000:00:12.0) NSID 2 from core 2: 2078.50 8.12 7695.52 1623.39 16486.04 00:11:04.506 PCIE (0000:00:12.0) NSID 3 from core 2: 2073.17 8.10 7714.07 1238.85 16563.23 00:11:04.506 ======================================================== 00:11:04.506 Total : 12444.38 48.61 7712.00 1238.85 17142.01 00:11:04.506 00:11:04.506 13:10:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 81569 00:11:06.405 Initializing NVMe Controllers 00:11:06.405 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:06.405 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:06.405 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:06.405 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:06.405 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:06.405 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:06.405 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:06.405 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:06.405 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:06.405 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:06.405 Initialization complete. Launching workers. 
00:11:06.405 ======================================================== 00:11:06.405 Latency(us) 00:11:06.405 Device Information : IOPS MiB/s Average min max 00:11:06.405 PCIE (0000:00:10.0) NSID 1 from core 0: 7223.73 28.22 2213.05 980.46 8950.29 00:11:06.405 PCIE (0000:00:11.0) NSID 1 from core 0: 7223.73 28.22 2214.35 1006.76 9319.20 00:11:06.405 PCIE (0000:00:13.0) NSID 1 from core 0: 7223.73 28.22 2214.31 992.01 9644.75 00:11:06.405 PCIE (0000:00:12.0) NSID 1 from core 0: 7223.73 28.22 2214.26 1005.05 9993.67 00:11:06.405 PCIE (0000:00:12.0) NSID 2 from core 0: 7223.73 28.22 2214.18 1001.02 9127.61 00:11:06.405 PCIE (0000:00:12.0) NSID 3 from core 0: 7223.73 28.22 2214.11 923.76 9123.30 00:11:06.405 ======================================================== 00:11:06.405 Total : 43342.36 169.31 2214.04 923.76 9993.67 00:11:06.405 00:11:06.405 13:10:03 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 81570 00:11:06.405 13:10:03 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=81645 00:11:06.405 13:10:03 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:11:06.405 13:10:03 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=81646 00:11:06.405 13:10:03 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:06.405 13:10:03 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:11:09.696 Initializing NVMe Controllers 00:11:09.696 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:09.696 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:09.696 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:09.696 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:09.696 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:09.696 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:09.696 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:09.696 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:09.696 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:09.696 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:09.696 Initialization complete. Launching workers. 
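nvme_multi_secondary runs the spdk_nvme_perf invocations traced just above concurrently: one longer (-t 5) instance and two shorter (-t 3) ones on separate core masks, all sharing -i 0, which appears to place them in one shared-memory group so they attach to the same controllers as primary and secondary processes. A rough standalone sketch with the same flags (paths and masks from this run; the sleep is an assumption, not part of the test) is:

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    # Longer instance: 4 KiB reads, queue depth 16, 5 seconds, core mask 0x1.
    $perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &
    long_pid=$!
    sleep 1    # assumption: let the first instance initialize before the others attach
    # Two shorter instances on different cores, same shared-memory group (-i 0).
    $perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &
    $perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &
    wait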
00:11:09.696 ======================================================== 00:11:09.696 Latency(us) 00:11:09.696 Device Information : IOPS MiB/s Average min max 00:11:09.696 PCIE (0000:00:10.0) NSID 1 from core 0: 5198.85 20.31 3075.50 1119.93 7665.83 00:11:09.696 PCIE (0000:00:11.0) NSID 1 from core 0: 5198.85 20.31 3077.10 1154.67 7841.86 00:11:09.696 PCIE (0000:00:13.0) NSID 1 from core 0: 5198.85 20.31 3077.11 1100.38 7912.41 00:11:09.696 PCIE (0000:00:12.0) NSID 1 from core 0: 5198.85 20.31 3077.16 1129.29 7737.65 00:11:09.696 PCIE (0000:00:12.0) NSID 2 from core 0: 5198.85 20.31 3077.11 1144.74 7825.11 00:11:09.696 PCIE (0000:00:12.0) NSID 3 from core 0: 5198.85 20.31 3077.14 1135.48 7996.26 00:11:09.696 ======================================================== 00:11:09.696 Total : 31193.08 121.85 3076.85 1100.38 7996.26 00:11:09.696 00:11:09.954 Initializing NVMe Controllers 00:11:09.954 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:09.954 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:09.954 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:09.954 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:09.954 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:09.954 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:09.954 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:09.954 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:09.954 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:09.954 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:09.955 Initialization complete. Launching workers. 00:11:09.955 ======================================================== 00:11:09.955 Latency(us) 00:11:09.955 Device Information : IOPS MiB/s Average min max 00:11:09.955 PCIE (0000:00:10.0) NSID 1 from core 1: 4815.46 18.81 3320.26 1113.03 7756.78 00:11:09.955 PCIE (0000:00:11.0) NSID 1 from core 1: 4815.46 18.81 3322.13 1129.93 7528.11 00:11:09.955 PCIE (0000:00:13.0) NSID 1 from core 1: 4815.46 18.81 3322.19 1163.33 7369.16 00:11:09.955 PCIE (0000:00:12.0) NSID 1 from core 1: 4815.46 18.81 3322.00 1181.51 7354.88 00:11:09.955 PCIE (0000:00:12.0) NSID 2 from core 1: 4815.46 18.81 3321.70 1130.23 7031.33 00:11:09.955 PCIE (0000:00:12.0) NSID 3 from core 1: 4815.46 18.81 3321.51 1147.01 7487.64 00:11:09.955 ======================================================== 00:11:09.955 Total : 28892.75 112.86 3321.63 1113.03 7756.78 00:11:09.955 00:11:11.853 Initializing NVMe Controllers 00:11:11.853 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:11.853 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:11.853 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:11.853 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:11.853 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:11.853 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:11.853 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:11.853 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:11.853 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:11.853 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:11.853 Initialization complete. Launching workers. 
00:11:11.853 ======================================================== 00:11:11.853 Latency(us) 00:11:11.853 Device Information : IOPS MiB/s Average min max 00:11:11.853 PCIE (0000:00:10.0) NSID 1 from core 2: 3279.31 12.81 4874.97 1035.78 17009.81 00:11:11.853 PCIE (0000:00:11.0) NSID 1 from core 2: 3279.31 12.81 4878.05 1029.26 17302.40 00:11:11.853 PCIE (0000:00:13.0) NSID 1 from core 2: 3279.31 12.81 4877.72 955.55 16286.10 00:11:11.853 PCIE (0000:00:12.0) NSID 1 from core 2: 3279.31 12.81 4878.32 876.80 20143.67 00:11:11.853 PCIE (0000:00:12.0) NSID 2 from core 2: 3279.31 12.81 4877.95 839.79 20437.54 00:11:11.853 PCIE (0000:00:12.0) NSID 3 from core 2: 3279.31 12.81 4877.87 770.72 19142.88 00:11:11.853 ======================================================== 00:11:11.853 Total : 19675.86 76.86 4877.48 770.72 20437.54 00:11:11.853 00:11:11.853 13:10:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 81645 00:11:11.853 13:10:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 81646 00:11:11.853 00:11:11.853 real 0m10.869s 00:11:11.853 user 0m18.407s 00:11:11.853 sys 0m0.790s 00:11:11.853 13:10:08 nvme.nvme_multi_secondary -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:11.853 ************************************ 00:11:11.853 END TEST nvme_multi_secondary 00:11:11.853 ************************************ 00:11:11.853 13:10:08 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:11:11.853 13:10:08 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:11:11.853 13:10:08 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:11:11.853 13:10:08 nvme -- common/autotest_common.sh@1085 -- # [[ -e /proc/80587 ]] 00:11:11.853 13:10:08 nvme -- common/autotest_common.sh@1086 -- # kill 80587 00:11:11.853 13:10:08 nvme -- common/autotest_common.sh@1087 -- # wait 80587 00:11:11.853 [2024-07-15 13:10:08.522401] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81518) is not found. Dropping the request. 00:11:11.853 [2024-07-15 13:10:08.522494] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81518) is not found. Dropping the request. 00:11:11.853 [2024-07-15 13:10:08.522542] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81518) is not found. Dropping the request. 00:11:11.853 [2024-07-15 13:10:08.522574] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81518) is not found. Dropping the request. 00:11:11.853 [2024-07-15 13:10:08.523444] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81518) is not found. Dropping the request. 00:11:11.853 [2024-07-15 13:10:08.523496] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81518) is not found. Dropping the request. 00:11:11.853 [2024-07-15 13:10:08.523523] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81518) is not found. Dropping the request. 00:11:11.853 [2024-07-15 13:10:08.523546] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81518) is not found. Dropping the request. 00:11:11.853 [2024-07-15 13:10:08.524430] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81518) is not found. Dropping the request. 
00:11:11.853 [2024-07-15 13:10:08.524512] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81518) is not found. Dropping the request. 00:11:11.853 [2024-07-15 13:10:08.524549] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81518) is not found. Dropping the request. 00:11:11.853 [2024-07-15 13:10:08.524590] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81518) is not found. Dropping the request. 00:11:11.853 [2024-07-15 13:10:08.525628] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81518) is not found. Dropping the request. 00:11:11.853 [2024-07-15 13:10:08.525719] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81518) is not found. Dropping the request. 00:11:11.853 [2024-07-15 13:10:08.525759] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81518) is not found. Dropping the request. 00:11:11.853 [2024-07-15 13:10:08.525820] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81518) is not found. Dropping the request. 00:11:12.111 13:10:08 nvme -- common/autotest_common.sh@1089 -- # rm -f /var/run/spdk_stub0 00:11:12.111 13:10:08 nvme -- common/autotest_common.sh@1093 -- # echo 2 00:11:12.111 13:10:08 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:12.111 13:10:08 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:12.111 13:10:08 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:12.111 13:10:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:12.111 ************************************ 00:11:12.111 START TEST bdev_nvme_reset_stuck_adm_cmd 00:11:12.111 ************************************ 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:12.111 * Looking for test storage... 
00:11:12.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1520 -- # bdfs=() 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1520 -- # local bdfs 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=81793 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 81793 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@827 -- # '[' -z 81793 ']' 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:12.111 13:10:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:12.369 [2024-07-15 13:10:08.947908] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:11:12.369 [2024-07-15 13:10:08.948691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81793 ] 00:11:12.628 [2024-07-15 13:10:09.120668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:12.628 [2024-07-15 13:10:09.223817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.628 [2024-07-15 13:10:09.223915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:12.628 [2024-07-15 13:10:09.223999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.628 [2024-07-15 13:10:09.224074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:13.194 13:10:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:13.195 13:10:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # return 0 00:11:13.195 13:10:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:11:13.195 13:10:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.195 13:10:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:13.453 nvme0n1 00:11:13.453 13:10:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.453 13:10:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:11:13.453 13:10:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_CDit9.txt 00:11:13.453 13:10:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:11:13.453 13:10:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.453 13:10:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:13.453 true 00:11:13.453 13:10:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.453 13:10:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:11:13.453 13:10:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721049009 00:11:13.453 13:10:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=81821 00:11:13.453 13:10:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:11:13.453 
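bdev_nvme_reset_stuck_adm_cmd drives everything through the JSON-RPC interface of the spdk_tgt launched above. A condensed sketch of the flow, using only the RPC calls visible in this log (bdf and payload copied from this run; the output file name and the redirection are assumptions, since redirections are not shown in the xtrace and the test uses mktemp), is:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Attach the controller at 0000:00:10.0 as "nvme0".
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    # Arm one admin-command error injection: opcode 10 (0x0a, Get Features) is held
    # for up to 15 s and then completed with sct=0 / sc=1 instead of being submitted.
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # Send the Get Features (Number of Queues) admin command that will get stuck; the
    # test keeps it in the background and later jq-extracts .cpl from the saved reply.
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== \
        > /tmp/err_inj_example.txt &    # hypothetical path; the test mktemp's its own
    # Reset the controller while the injected command is pending, then clean up.
    $rpc bdev_nvme_reset_controller nvme0
    wait
    $rpc bdev_nvme_detach_controller nvme0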
13:10:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:13.453 13:10:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:11:15.350 13:10:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:11:15.350 13:10:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.350 13:10:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:15.350 [2024-07-15 13:10:11.999084] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:11:15.350 [2024-07-15 13:10:11.999926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:11:15.350 [2024-07-15 13:10:11.999985] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:15.350 [2024-07-15 13:10:12.000010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.350 [2024-07-15 13:10:12.001965] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:15.350 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.350 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 81821 00:11:15.350 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 81821 00:11:15.350 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 81821 00:11:15.350 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:11:15.350 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=3 00:11:15.350 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:11:15.350 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.350 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:15.350 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.350 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:11:15.350 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_CDit9.txt 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:15.612 13:10:12 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_CDit9.txt 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 81793 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@946 -- # '[' -z 81793 ']' 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # kill -0 81793 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@951 -- # uname 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81793 00:11:15.612 killing process with pid 81793 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81793' 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@965 -- # kill 81793 00:11:15.612 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # wait 81793 00:11:15.915 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:11:15.915 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:11:15.915 00:11:15.915 real 
0m3.943s 00:11:15.915 user 0m13.823s 00:11:15.915 sys 0m0.679s 00:11:15.915 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:15.915 ************************************ 00:11:15.915 END TEST bdev_nvme_reset_stuck_adm_cmd 00:11:15.915 ************************************ 00:11:15.915 13:10:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:15.915 13:10:12 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:11:15.915 13:10:12 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:11:15.915 13:10:12 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:15.915 13:10:12 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:15.915 13:10:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:16.172 ************************************ 00:11:16.172 START TEST nvme_fio 00:11:16.172 ************************************ 00:11:16.172 13:10:12 nvme.nvme_fio -- common/autotest_common.sh@1121 -- # nvme_fio_test 00:11:16.172 13:10:12 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:11:16.172 13:10:12 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:11:16.172 13:10:12 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:11:16.172 13:10:12 nvme.nvme_fio -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:16.172 13:10:12 nvme.nvme_fio -- common/autotest_common.sh@1509 -- # local bdfs 00:11:16.172 13:10:12 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:16.172 13:10:12 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:16.172 13:10:12 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:11:16.172 13:10:12 nvme.nvme_fio -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:11:16.172 13:10:12 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:16.172 13:10:12 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:11:16.172 13:10:12 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:11:16.172 13:10:12 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:16.172 13:10:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:16.172 13:10:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:16.430 13:10:12 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:16.430 13:10:12 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:16.688 13:10:13 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:16.688 13:10:13 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:16.688 13:10:13 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:16.688 13:10:13 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:11:16.688 13:10:13 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:11:16.688 13:10:13 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:11:16.688 13:10:13 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:16.688 13:10:13 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:11:16.688 13:10:13 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:11:16.688 13:10:13 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:11:16.688 13:10:13 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:16.688 13:10:13 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:11:16.688 13:10:13 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:11:16.688 13:10:13 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:16.688 13:10:13 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:16.688 13:10:13 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:11:16.688 13:10:13 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:16.688 13:10:13 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:16.946 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:16.946 fio-3.35 00:11:16.946 Starting 1 thread 00:11:21.128 00:11:21.128 test: (groupid=0, jobs=1): err= 0: pid=81946: Mon Jul 15 13:10:17 2024 00:11:21.128 read: IOPS=15.3k, BW=59.9MiB/s (62.8MB/s)(120MiB/2001msec) 00:11:21.128 slat (nsec): min=4713, max=62067, avg=6997.56, stdev=2109.70 00:11:21.128 clat (usec): min=295, max=9103, avg=4153.07, stdev=647.97 00:11:21.128 lat (usec): min=301, max=9165, avg=4160.06, stdev=648.86 00:11:21.128 clat percentiles (usec): 00:11:21.128 | 1.00th=[ 2966], 5.00th=[ 3359], 10.00th=[ 3458], 20.00th=[ 3556], 00:11:21.128 | 30.00th=[ 3752], 40.00th=[ 4080], 50.00th=[ 4228], 60.00th=[ 4293], 00:11:21.128 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4686], 95.00th=[ 5080], 00:11:21.128 | 99.00th=[ 6587], 99.50th=[ 6980], 99.90th=[ 7242], 99.95th=[ 7504], 00:11:21.128 | 99.99th=[ 8848] 00:11:21.128 bw ( KiB/s): min=55632, max=62000, per=96.20%, avg=58973.33, stdev=3195.64, samples=3 00:11:21.128 iops : min=13908, max=15500, avg=14743.33, stdev=798.91, samples=3 00:11:21.128 write: IOPS=15.4k, BW=60.0MiB/s (62.9MB/s)(120MiB/2001msec); 0 zone resets 00:11:21.128 slat (nsec): min=4784, max=50866, avg=7156.78, stdev=2127.39 00:11:21.128 clat (usec): min=363, max=8940, avg=4160.02, stdev=648.92 00:11:21.128 lat (usec): min=374, max=8951, avg=4167.18, stdev=649.81 00:11:21.128 clat percentiles (usec): 00:11:21.128 | 1.00th=[ 2999], 5.00th=[ 3392], 10.00th=[ 3458], 20.00th=[ 3589], 00:11:21.128 | 30.00th=[ 3752], 40.00th=[ 4080], 50.00th=[ 4228], 60.00th=[ 4293], 00:11:21.128 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4686], 95.00th=[ 5080], 00:11:21.128 | 99.00th=[ 6587], 99.50th=[ 6980], 99.90th=[ 7242], 99.95th=[ 7570], 00:11:21.128 | 99.99th=[ 8586] 00:11:21.128 bw ( KiB/s): min=55048, max=61696, per=95.73%, avg=58789.33, stdev=3401.69, samples=3 00:11:21.128 iops : min=13762, max=15424, avg=14697.33, stdev=850.42, samples=3 00:11:21.128 lat (usec) : 
500=0.01%, 750=0.01%, 1000=0.01% 00:11:21.128 lat (msec) : 2=0.08%, 4=37.65%, 10=62.23% 00:11:21.128 cpu : usr=99.00%, sys=0.05%, ctx=5, majf=0, minf=627 00:11:21.128 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:21.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:21.128 issued rwts: total=30668,30721,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.128 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:21.128 00:11:21.128 Run status group 0 (all jobs): 00:11:21.128 READ: bw=59.9MiB/s (62.8MB/s), 59.9MiB/s-59.9MiB/s (62.8MB/s-62.8MB/s), io=120MiB (126MB), run=2001-2001msec 00:11:21.128 WRITE: bw=60.0MiB/s (62.9MB/s), 60.0MiB/s-60.0MiB/s (62.9MB/s-62.9MB/s), io=120MiB (126MB), run=2001-2001msec 00:11:21.128 ----------------------------------------------------- 00:11:21.128 Suppressions used: 00:11:21.128 count bytes template 00:11:21.128 1 32 /usr/src/fio/parse.c 00:11:21.128 1 8 libtcmalloc_minimal.so 00:11:21.128 ----------------------------------------------------- 00:11:21.128 00:11:21.128 13:10:17 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:21.128 13:10:17 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:21.128 13:10:17 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:21.128 13:10:17 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:21.128 13:10:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:21.128 13:10:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:21.128 13:10:17 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:21.128 13:10:17 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:21.128 13:10:17 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:21.128 13:10:17 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:11:21.128 13:10:17 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:21.128 13:10:17 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:11:21.128 13:10:17 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:21.128 13:10:17 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:11:21.128 13:10:17 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:11:21.129 13:10:17 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:11:21.129 13:10:17 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:21.129 13:10:17 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:11:21.129 13:10:17 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:11:21.386 13:10:17 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:21.386 13:10:17 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:11:21.386 13:10:17 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:11:21.386 13:10:17 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:21.386 13:10:17 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:21.386 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:21.386 fio-3.35 00:11:21.386 Starting 1 thread 00:11:24.695 00:11:24.695 test: (groupid=0, jobs=1): err= 0: pid=82011: Mon Jul 15 13:10:21 2024 00:11:24.695 read: IOPS=15.6k, BW=60.8MiB/s (63.8MB/s)(122MiB/2001msec) 00:11:24.695 slat (usec): min=4, max=420, avg= 6.80, stdev= 3.26 00:11:24.695 clat (usec): min=241, max=10668, avg=4092.80, stdev=736.59 00:11:24.695 lat (usec): min=247, max=10721, avg=4099.61, stdev=737.57 00:11:24.695 clat percentiles (usec): 00:11:24.695 | 1.00th=[ 2999], 5.00th=[ 3458], 10.00th=[ 3556], 20.00th=[ 3654], 00:11:24.695 | 30.00th=[ 3720], 40.00th=[ 3752], 50.00th=[ 3818], 60.00th=[ 3916], 00:11:24.695 | 70.00th=[ 4113], 80.00th=[ 4555], 90.00th=[ 4948], 95.00th=[ 5800], 00:11:24.695 | 99.00th=[ 6652], 99.50th=[ 6783], 99.90th=[ 7635], 99.95th=[ 9372], 00:11:24.695 | 99.99th=[10552] 00:11:24.695 bw ( KiB/s): min=57184, max=66160, per=97.44%, avg=60714.67, stdev=4784.52, samples=3 00:11:24.695 iops : min=14296, max=16540, avg=15178.67, stdev=1196.13, samples=3 00:11:24.695 write: IOPS=15.6k, BW=60.9MiB/s (63.8MB/s)(122MiB/2001msec); 0 zone resets 00:11:24.695 slat (usec): min=4, max=424, avg= 7.00, stdev= 4.31 00:11:24.695 clat (usec): min=263, max=10548, avg=4095.86, stdev=733.61 00:11:24.695 lat (usec): min=269, max=10560, avg=4102.86, stdev=734.59 00:11:24.695 clat percentiles (usec): 00:11:24.695 | 1.00th=[ 3032], 5.00th=[ 3458], 10.00th=[ 3556], 20.00th=[ 3654], 00:11:24.695 | 30.00th=[ 3720], 40.00th=[ 3752], 50.00th=[ 3818], 60.00th=[ 3916], 00:11:24.695 | 70.00th=[ 4113], 80.00th=[ 4555], 90.00th=[ 4948], 95.00th=[ 5735], 00:11:24.695 | 99.00th=[ 6587], 99.50th=[ 6783], 99.90th=[ 8291], 99.95th=[ 9503], 00:11:24.695 | 99.99th=[10421] 00:11:24.695 bw ( KiB/s): min=56224, max=65608, per=96.81%, avg=60341.33, stdev=4796.41, samples=3 00:11:24.695 iops : min=14056, max=16402, avg=15085.33, stdev=1199.10, samples=3 00:11:24.695 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:11:24.695 lat (msec) : 2=0.12%, 4=66.76%, 10=33.05%, 20=0.03% 00:11:24.695 cpu : usr=98.35%, sys=0.30%, ctx=19, majf=0, minf=626 00:11:24.695 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:24.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.695 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:24.695 issued rwts: total=31170,31181,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.695 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.695 00:11:24.695 Run status group 0 (all jobs): 00:11:24.695 READ: bw=60.8MiB/s (63.8MB/s), 60.8MiB/s-60.8MiB/s (63.8MB/s-63.8MB/s), io=122MiB (128MB), run=2001-2001msec 00:11:24.695 WRITE: bw=60.9MiB/s (63.8MB/s), 60.9MiB/s-60.9MiB/s (63.8MB/s-63.8MB/s), io=122MiB (128MB), run=2001-2001msec 00:11:24.953 ----------------------------------------------------- 00:11:24.953 Suppressions used: 00:11:24.953 count bytes template 00:11:24.953 1 32 
/usr/src/fio/parse.c 00:11:24.953 1 8 libtcmalloc_minimal.so 00:11:24.953 ----------------------------------------------------- 00:11:24.953 00:11:24.953 13:10:21 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:24.953 13:10:21 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:24.953 13:10:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:24.953 13:10:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:25.211 13:10:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:25.211 13:10:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:25.469 13:10:21 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:25.469 13:10:21 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:25.469 13:10:21 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:25.469 13:10:21 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:11:25.469 13:10:21 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:25.469 13:10:21 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:11:25.469 13:10:21 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:25.469 13:10:21 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:11:25.469 13:10:21 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:11:25.469 13:10:21 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:11:25.469 13:10:21 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:25.469 13:10:22 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:11:25.469 13:10:22 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:11:25.469 13:10:22 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:25.469 13:10:22 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:25.469 13:10:22 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:11:25.469 13:10:22 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:25.469 13:10:22 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:25.469 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:25.469 fio-3.35 00:11:25.469 Starting 1 thread 00:11:28.797 00:11:28.797 test: (groupid=0, jobs=1): err= 0: pid=82068: Mon Jul 15 13:10:25 2024 00:11:28.797 read: IOPS=14.4k, BW=56.1MiB/s (58.9MB/s)(112MiB/2001msec) 00:11:28.797 slat (nsec): min=4652, max=72963, avg=7205.53, stdev=2444.55 00:11:28.797 clat (usec): min=331, max=10992, avg=4441.08, stdev=903.82 00:11:28.797 lat (usec): min=338, max=11002, avg=4448.29, stdev=904.97 
00:11:28.797 clat percentiles (usec): 00:11:28.797 | 1.00th=[ 2966], 5.00th=[ 3294], 10.00th=[ 3425], 20.00th=[ 3818], 00:11:28.797 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4359], 60.00th=[ 4424], 00:11:28.797 | 70.00th=[ 4490], 80.00th=[ 4686], 90.00th=[ 5800], 95.00th=[ 6456], 00:11:28.797 | 99.00th=[ 7308], 99.50th=[ 7570], 99.90th=[ 7963], 99.95th=[ 8586], 00:11:28.797 | 99.99th=[ 8586] 00:11:28.797 bw ( KiB/s): min=56120, max=56471, per=97.85%, avg=56255.67, stdev=188.57, samples=3 00:11:28.797 iops : min=14030, max=14117, avg=14063.67, stdev=46.72, samples=3 00:11:28.797 write: IOPS=14.4k, BW=56.2MiB/s (59.0MB/s)(113MiB/2001msec); 0 zone resets 00:11:28.797 slat (usec): min=4, max=115, avg= 7.45, stdev= 2.47 00:11:28.797 clat (usec): min=297, max=10707, avg=4431.48, stdev=901.60 00:11:28.797 lat (usec): min=304, max=10718, avg=4438.93, stdev=902.72 00:11:28.797 clat percentiles (usec): 00:11:28.797 | 1.00th=[ 2966], 5.00th=[ 3294], 10.00th=[ 3392], 20.00th=[ 3720], 00:11:28.797 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4359], 60.00th=[ 4424], 00:11:28.797 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 5800], 95.00th=[ 6456], 00:11:28.797 | 99.00th=[ 7308], 99.50th=[ 7570], 99.90th=[ 8455], 99.95th=[ 8848], 00:11:28.797 | 99.99th=[10421] 00:11:28.797 bw ( KiB/s): min=55952, max=56774, per=97.65%, avg=56231.33, stdev=470.03, samples=3 00:11:28.797 iops : min=13988, max=14193, avg=14057.67, stdev=117.22, samples=3 00:11:28.797 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.02% 00:11:28.797 lat (msec) : 2=0.10%, 4=21.89%, 10=77.95%, 20=0.01% 00:11:28.797 cpu : usr=98.75%, sys=0.20%, ctx=7, majf=0, minf=627 00:11:28.797 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:28.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:28.797 issued rwts: total=28760,28807,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.797 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:28.797 00:11:28.797 Run status group 0 (all jobs): 00:11:28.797 READ: bw=56.1MiB/s (58.9MB/s), 56.1MiB/s-56.1MiB/s (58.9MB/s-58.9MB/s), io=112MiB (118MB), run=2001-2001msec 00:11:28.797 WRITE: bw=56.2MiB/s (59.0MB/s), 56.2MiB/s-56.2MiB/s (59.0MB/s-59.0MB/s), io=113MiB (118MB), run=2001-2001msec 00:11:29.056 ----------------------------------------------------- 00:11:29.056 Suppressions used: 00:11:29.056 count bytes template 00:11:29.056 1 32 /usr/src/fio/parse.c 00:11:29.056 1 8 libtcmalloc_minimal.so 00:11:29.056 ----------------------------------------------------- 00:11:29.056 00:11:29.056 13:10:25 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:29.056 13:10:25 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:29.056 13:10:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:29.056 13:10:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:29.314 13:10:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:29.314 13:10:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:29.573 13:10:26 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:29.573 13:10:26 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:29.573 
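The trace above repeats one recipe per PCIe controller (0000:00:10.0 through 0000:00:13.0): identify the namespace, check whether it uses an Extended Data LBA format to pick the block size, then launch fio with the SPDK ioengine preloaded alongside ASan. Below is a minimal bash sketch of that recipe, assuming the paths that appear in the trace (spdk_nvme_identify, the plugin at build/fio/spdk_nvme, fio at /usr/src/fio/fio) and a hypothetical block size for the Extended-LBA branch, which none of the captured runs take; the real logic lives in test/nvme/nvme.sh and common/autotest_common.sh.

identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
config=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
# Resolved in the trace via: ldd "$plugin" | grep libasan | awk '{print $3}'
asan_lib=/usr/lib64/libasan.so.8
for bdf in "${bdfs[@]}"; do
    # Skip controllers that do not report an active namespace.
    "$identify" -r "trtype:PCIe traddr:$bdf" | grep -qE '^Namespace ID:[0-9]+' || continue
    # Pick the I/O size; every run in this log lands in the plain 4096 branch.
    if "$identify" -r "trtype:PCIe traddr:$bdf" | grep -q 'Extended Data LBA'; then
        bs=4104   # hypothetical metadata-carrying size, not taken in these runs
    else
        bs=4096
    fi
    # The SPDK ioengine is injected with LD_PRELOAD, and the PCI address is
    # written with dots inside --filename, matching the command lines above.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$config" \
        "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs="$bs"
done

As a sanity check on the summaries, bandwidth and IOPS are tied together by the 4 KiB block size: for the first device, roughly 15.3k IOPS × 4096 B ≈ 62.8 MB/s, which matches the READ line of its run status group.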
13:10:26 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:29.573 13:10:26 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:11:29.573 13:10:26 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:29.573 13:10:26 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:11:29.573 13:10:26 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:29.573 13:10:26 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:11:29.573 13:10:26 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:11:29.573 13:10:26 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:11:29.573 13:10:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:29.573 13:10:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:11:29.573 13:10:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:11:29.573 13:10:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:29.573 13:10:26 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:29.573 13:10:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:11:29.573 13:10:26 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:29.573 13:10:26 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:29.573 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:29.573 fio-3.35 00:11:29.573 Starting 1 thread 00:11:33.760 00:11:33.760 test: (groupid=0, jobs=1): err= 0: pid=82129: Mon Jul 15 13:10:29 2024 00:11:33.760 read: IOPS=16.7k, BW=65.2MiB/s (68.4MB/s)(130MiB/2001msec) 00:11:33.760 slat (nsec): min=4715, max=57756, avg=6347.56, stdev=2198.67 00:11:33.760 clat (usec): min=444, max=11226, avg=3809.21, stdev=896.12 00:11:33.760 lat (usec): min=451, max=11277, avg=3815.56, stdev=897.33 00:11:33.760 clat percentiles (usec): 00:11:33.760 | 1.00th=[ 3032], 5.00th=[ 3228], 10.00th=[ 3294], 20.00th=[ 3359], 00:11:33.760 | 30.00th=[ 3392], 40.00th=[ 3458], 50.00th=[ 3490], 60.00th=[ 3556], 00:11:33.760 | 70.00th=[ 3687], 80.00th=[ 4178], 90.00th=[ 4490], 95.00th=[ 5735], 00:11:33.760 | 99.00th=[ 7701], 99.50th=[ 7832], 99.90th=[ 8094], 99.95th=[ 8848], 00:11:33.760 | 99.99th=[10945] 00:11:33.760 bw ( KiB/s): min=59960, max=73496, per=100.00%, avg=68848.00, stdev=7699.94, samples=3 00:11:33.760 iops : min=14990, max=18374, avg=17212.00, stdev=1924.98, samples=3 00:11:33.760 write: IOPS=16.7k, BW=65.3MiB/s (68.5MB/s)(131MiB/2001msec); 0 zone resets 00:11:33.760 slat (usec): min=4, max=116, avg= 6.56, stdev= 2.30 00:11:33.760 clat (usec): min=308, max=10983, avg=3827.64, stdev=912.47 00:11:33.760 lat (usec): min=315, max=10993, avg=3834.21, stdev=913.67 00:11:33.760 clat percentiles (usec): 00:11:33.760 | 1.00th=[ 2999], 5.00th=[ 3228], 10.00th=[ 3294], 20.00th=[ 3359], 00:11:33.760 | 30.00th=[ 3392], 40.00th=[ 3458], 50.00th=[ 
3490], 60.00th=[ 3556], 00:11:33.760 | 70.00th=[ 3687], 80.00th=[ 4228], 90.00th=[ 4490], 95.00th=[ 5866], 00:11:33.760 | 99.00th=[ 7701], 99.50th=[ 7832], 99.90th=[ 8094], 99.95th=[ 9241], 00:11:33.760 | 99.99th=[10683] 00:11:33.760 bw ( KiB/s): min=60360, max=73200, per=100.00%, avg=68728.00, stdev=7252.62, samples=3 00:11:33.760 iops : min=15090, max=18300, avg=17182.00, stdev=1813.16, samples=3 00:11:33.760 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:33.760 lat (msec) : 2=0.06%, 4=75.78%, 10=24.10%, 20=0.03% 00:11:33.760 cpu : usr=98.10%, sys=0.95%, ctx=468, majf=0, minf=623 00:11:33.760 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:33.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:33.760 issued rwts: total=33402,33472,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:33.760 00:11:33.760 Run status group 0 (all jobs): 00:11:33.760 READ: bw=65.2MiB/s (68.4MB/s), 65.2MiB/s-65.2MiB/s (68.4MB/s-68.4MB/s), io=130MiB (137MB), run=2001-2001msec 00:11:33.760 WRITE: bw=65.3MiB/s (68.5MB/s), 65.3MiB/s-65.3MiB/s (68.5MB/s-68.5MB/s), io=131MiB (137MB), run=2001-2001msec 00:11:33.760 ----------------------------------------------------- 00:11:33.760 Suppressions used: 00:11:33.760 count bytes template 00:11:33.760 1 32 /usr/src/fio/parse.c 00:11:33.760 1 8 libtcmalloc_minimal.so 00:11:33.760 ----------------------------------------------------- 00:11:33.760 00:11:33.760 13:10:29 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:33.760 13:10:29 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:11:33.760 00:11:33.760 real 0m17.326s 00:11:33.760 user 0m13.875s 00:11:33.760 sys 0m2.417s 00:11:33.760 13:10:29 nvme.nvme_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:33.760 13:10:29 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:11:33.760 ************************************ 00:11:33.760 END TEST nvme_fio 00:11:33.760 ************************************ 00:11:33.760 00:11:33.760 real 1m28.015s 00:11:33.760 user 3m35.032s 00:11:33.760 sys 0m14.492s 00:11:33.760 13:10:30 nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:33.760 13:10:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:33.760 ************************************ 00:11:33.760 END TEST nvme 00:11:33.760 ************************************ 00:11:33.760 13:10:30 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:11:33.760 13:10:30 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:33.760 13:10:30 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:33.760 13:10:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:33.760 13:10:30 -- common/autotest_common.sh@10 -- # set +x 00:11:33.760 ************************************ 00:11:33.760 START TEST nvme_scc 00:11:33.760 ************************************ 00:11:33.760 13:10:30 nvme_scc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:33.760 * Looking for test storage... 
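Each suite in this log is driven through run_test, which is what produces the START/END TEST banners and the real/user/sys totals printed above. A rough sketch of the observable behavior, hedged because the real helper in common/autotest_common.sh also validates its arguments and toggles xtrace around the banners:

run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"    # e.g. run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
}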
00:11:33.760 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:33.760 13:10:30 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:33.760 13:10:30 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:33.760 13:10:30 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:33.760 13:10:30 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:33.760 13:10:30 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:33.760 13:10:30 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.760 13:10:30 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.760 13:10:30 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.760 13:10:30 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.760 13:10:30 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.760 13:10:30 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.760 13:10:30 nvme_scc -- paths/export.sh@5 -- # export PATH 00:11:33.760 13:10:30 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.760 13:10:30 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:11:33.760 13:10:30 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:33.760 13:10:30 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:11:33.760 13:10:30 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:33.760 13:10:30 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:11:33.760 13:10:30 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:33.760 13:10:30 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:33.760 13:10:30 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:33.760 13:10:30 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:11:33.760 13:10:30 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:33.760 13:10:30 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:11:33.760 13:10:30 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:33.760 13:10:30 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:33.760 13:10:30 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:34.017 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:34.017 Waiting for block devices as requested 00:11:34.017 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:34.275 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:34.275 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:34.275 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:39.549 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:39.549 13:10:36 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:39.549 13:10:36 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:39.549 13:10:36 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:39.549 13:10:36 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:39.549 13:10:36 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:39.549 
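From here on the trace is scan_nvme_ctrls walking every controller and namespace under /sys/class/nvme and caching each id-ctrl/id-ns field in a Bash associative array (nvme0, nvme0n1, ...), a few xtrace lines per field. A condensed sketch of the nvme_get helper doing that work, assuming nvme-cli's "field : value" output layout; the real implementation in test/common/nvme/functions.sh also trims padding and records the controller-to-namespace mapping:

nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"    # e.g. declare the global associative array nvme0
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue
        # "vid       : 0x1b36" becomes nvme0[vid]=0x1b36
        eval "${ref}[${reg// /}]=\"${val# }\""
    done < <(/usr/local/src/nvme-cli/nvme "$@")
}
# Used above as: nvme_get nvme0 id-ctrl /dev/nvme0, after which fields such as
# ${nvme0[sn]} and ${nvme0[mdts]} can be queried by the rest of nvme_scc.sh.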
13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:39.549 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:39.550 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:39.551 13:10:36 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.551 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:39.552 13:10:36 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:39.552 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.553 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.554 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 
00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:39.555 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:39.556 13:10:36 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:39.556 13:10:36 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:39.556 13:10:36 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:39.556 13:10:36 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:39.556 13:10:36 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.556 13:10:36 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:39.556 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[ver]="0x10400"' 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:39.557 13:10:36 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 
13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.557 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:39.558 13:10:36 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:39.558 13:10:36 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:39.558 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:39.559 13:10:36 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.559 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:39.560 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.561 
13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 
00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.561 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.562 13:10:36 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:39.562 13:10:36 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:39.562 13:10:36 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:11:39.562 13:10:36 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:39.562 13:10:36 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:39.562 13:10:36 
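The assignments traced just above (ctrls["$ctrl_dev"], nvmes["$ctrl_dev"], bdfs["$ctrl_dev"], ordered_ctrls[...]) come from the controller-enumeration loop in nvme/functions.sh, which walks /sys/class/nvme/nvme* and records each controller against its PCI address before moving on to the next device (here nvme2 at 0000:00:12.0). A minimal sketch of that pattern follows; the helper name scan_ctrls_sketch and the sysfs-based PCI lookup are illustrative assumptions, not copies of the real script.

    # Sketch only: enumerate controllers and register them in associative arrays,
    # mirroring the ctrls/nvmes/bdfs/ordered_ctrls assignments seen in the trace.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    scan_ctrls_sketch() {
        local ctrl ctrl_dev pci
        for ctrl in /sys/class/nvme/nvme*; do
            [[ -e $ctrl ]] || continue
            ctrl_dev=${ctrl##*/}                              # e.g. nvme2
            pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0 (assumed sysfs lookup)
            ctrls["$ctrl_dev"]=$ctrl_dev
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # name of the per-controller namespace map
            bdfs["$ctrl_dev"]=$pci
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # indexed by controller number
        done
    }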
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:39.562 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.563 13:10:36 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.563 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 
00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 13:10:36 nvme_scc -- 
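The nvme2[wctemp]=343 and nvme2[cctemp]=373 values captured above are the controller's warning and critical composite temperature thresholds, which NVMe reports in Kelvin: 343 K − 273.15 ≈ 70 °C and 373 K − 273.15 ≈ 100 °C.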
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 
00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.565 13:10:36 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:39.565 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:39.566 13:10:36 nvme_scc -- 
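The nvme2[sqes]=0x66 and nvme2[cqes]=0x44 fields above encode queue entry sizes as powers of two, with the low nibble giving the required size and the high nibble the maximum: 0x66 → 2^6 = 64-byte submission queue entries, and 0x44 → 2^4 = 16-byte completion queue entries.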
nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.566 
13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2[ofcs]="0"' 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.566 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # 
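The nvme2n1[...] assignments that follow are produced by the read/eval loop visible in the trace: nvme_get declares a global associative array named after the device (local -gA 'nvme2n1=()'), then splits each "field : value" line of `nvme id-ns` on the first ':' and evals it into the array. A minimal sketch of that loop, under the assumption that whitespace trimming is done inline (the helper name nvme_get_sketch is illustrative, not the real function):

    # Sketch only: parse "field : value" output into a global associative array,
    # as the IFS=: / read -r reg val / eval lines in the trace above are doing.
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # global associative array named after the device
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue       # skip header/blank lines (the [[ -n '' ]] checks above)
            reg=${reg//[[:space:]]/}        # field name with padding stripped
            val=${val# }                    # value with the leading space dropped
            eval "${ref}[$reg]=\"$val\""    # e.g. nvme2n1[nsze]="0x100000"
        done < <("$@")                      # id-ctrl / id-ns command passed by the caller
    }
    # usage mirroring the trace (binary path as logged):
    # nvme_get_sketch nvme2n1 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1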
read -r reg val 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.567 13:10:36 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.567 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:39.568 13:10:36 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.568 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:39.830 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme2n2[nsze]="0x100000"' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 
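The trace above is nvme_get() from nvme/functions.sh walking the "field : value" lines that nvme-cli's id-ns prints for /dev/nvme2n2 and storing each pair in a global associative array named after the namespace (nvme2n1, nvme2n2, ...). A minimal sketch of that parsing loop, assuming nvme-cli's usual "field : value" output and with the function name chosen here only for illustration, is:

    # Sketch only: mirrors the IFS=: / read / eval pattern visible in the trace.
    nvme_get_sketch() {
        local ref=$1 dev=$2 reg val
        local -gA "${ref}=()"                  # e.g. declares the global array nvme2n2
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}           # field names are padded with spaces
            [[ -n $val ]] || continue          # skip header/blank lines
            eval "${ref}[$reg]=\"${val# }\""   # e.g. nvme2n2[nsze]="0x100000"
        done < <(nvme id-ns "$dev")
    }
    # usage: nvme_get_sketch nvme2n2 /dev/nvme2n2; echo "${nvme2n2[nsze]}"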
00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:39.831 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:39.832 13:10:36 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@20 
-- # local -gA 'nvme2n3=()' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:39.832 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.832 13:10:36 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[nabo]="0"' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
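The mssrl, mcl and msrc values just captured for nvme2n3 (128, 128 and 127) are the Identify Namespace copy limits relevant to this nvme_scc run: Maximum Single Source Range Length and Maximum Copy Length in logical blocks, and Maximum Source Range Count (a 0's based value, so 127 means up to 128 source ranges). A hypothetical helper, not part of the test scripts, that checks a proposed Simple Copy request against the limits stored by nvme_get could read:

    # Illustrative check against the parsed copy limits (bash 4.3+ for namerefs).
    scc_copy_fits() {
        local -n ns=$1                             # e.g. the nvme2n3 array above
        local ranges=$2 lbas_per_range=$3
        local max_ranges=$(( ${ns[msrc]} + 1 ))    # msrc is reported 0's based
        (( ranges <= max_ranges )) \
            && (( lbas_per_range <= ${ns[mssrl]} )) \
            && (( ranges * lbas_per_range <= ${ns[mcl]} ))
    }
    # usage: scc_copy_fits nvme2n3 4 32 && echo "within advertised copy limits"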
00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:39.833 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
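Each lbafN string recorded above describes one LBA format: ms is the metadata size in bytes, lbads is the LBA data size as a power of two, and rp is the relative-performance hint. With flbas=0x4 these QEMU namespaces use lbaf4, i.e. 4096-byte blocks (lbads:12) with no per-block metadata, which is why that entry is marked "(in use)". A small sketch, assuming the low bits of flbas select the format index, that derives the active block size from the fields nvme_get stored:

    # Illustrative only: compute the in-use LBA data size from the parsed fields.
    ns_block_size() {
        local -n ns=$1                                    # e.g. nvme2n3
        local idx=$(( ${ns[flbas]} & 0xf ))               # 0x4 -> lbaf4
        local lbads
        lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${ns[lbaf$idx]}")
        echo $(( 1 << lbads ))                            # lbads:12 -> 4096
    }
    # usage: ns_block_size nvme2n3   -> prints 4096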
00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:39.834 13:10:36 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:39.834 13:10:36 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:39.834 13:10:36 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:39.834 13:10:36 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.834 13:10:36 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:39.834 13:10:36 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:39.834 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:39.834 13:10:36 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.835 13:10:36 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:39.835 13:10:36 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:39.835 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.836 13:10:36 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:39.836 13:10:36 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:39.836 
13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:39.836 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
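The xtrace above boils down to one pattern: functions.sh runs nvme id-ctrl on the device, splits each output line on the first ':' into a register name and a value, and evals the pair into a per-controller associative array (nvme3 here). The lines below are a condensed sketch of that pattern, not the script's own code; the array name ctrl_regs, the whitespace trimming, and the fixed /dev/nvme0 path are illustrative assumptions.

    #!/usr/bin/env bash
    # Sketch: collect "reg : value" pairs from nvme-cli into an associative array.
    declare -A ctrl_regs
    while IFS=':' read -r reg val; do
        reg=$(echo "$reg" | tr -d '[:space:]')                          # e.g. "oncs"
        val=$(echo "$val" | sed 's/^[[:space:]]*//;s/[[:space:]]*$//')  # keep inner spaces
        [[ -n $reg && -n $val ]] || continue                            # skip headers/blank values
        ctrl_regs[$reg]=$val                                            # e.g. ctrl_regs[oncs]=0x15d
    done < <(nvme id-ctrl /dev/nvme0)
    echo "oncs=${ctrl_regs[oncs]} subnqn=${ctrl_regs[subnqn]}"

Once the registers are captured this way, later feature checks only need an array lookup, which is what the get_nvme_ctrl_feature calls below do against the per-controller arrays (nvme0 through nvme3).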
00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:39.837 13:10:36 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:11:39.837 13:10:36 nvme_scc -- 
nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1 00:11:39.837 13:10:36 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:11:39.837 13:10:36 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:11:39.837 13:10:36 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:11:39.837 13:10:36 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:40.404 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:40.969 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:40.969 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:40.969 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:40.970 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:40.970 13:10:37 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:40.970 13:10:37 nvme_scc -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:11:40.970 13:10:37 nvme_scc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:40.970 13:10:37 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:40.970 ************************************ 00:11:40.970 START TEST nvme_simple_copy 00:11:40.970 ************************************ 00:11:40.970 13:10:37 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:41.228 Initializing NVMe Controllers 00:11:41.228 Attaching to 0000:00:10.0 00:11:41.228 Controller supports SCC. Attached to 0000:00:10.0 00:11:41.228 Namespace ID: 1 size: 6GB 00:11:41.228 Initialization complete. 00:11:41.228 00:11:41.228 Controller QEMU NVMe Ctrl (12340 ) 00:11:41.228 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:11:41.228 Namespace Block Size:4096 00:11:41.228 Writing LBAs 0 to 63 with Random Data 00:11:41.228 Copied LBAs from 0 - 63 to the Destination LBA 256 00:11:41.228 LBAs matching Written Data: 64 00:11:41.228 00:11:41.228 real 0m0.296s 00:11:41.228 user 0m0.118s 00:11:41.228 sys 0m0.076s 00:11:41.228 13:10:37 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:41.228 ************************************ 00:11:41.228 13:10:37 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:11:41.228 END TEST nvme_simple_copy 00:11:41.228 ************************************ 00:11:41.228 00:11:41.228 real 0m7.874s 00:11:41.228 user 0m1.200s 00:11:41.228 sys 0m1.611s 00:11:41.228 13:10:37 nvme_scc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:41.228 13:10:37 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:41.228 ************************************ 00:11:41.228 END TEST nvme_scc 00:11:41.228 ************************************ 00:11:41.486 13:10:37 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:11:41.486 13:10:37 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:11:41.486 13:10:37 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:11:41.486 13:10:37 -- spdk/autotest.sh@232 -- # [[ 1 -eq 1 ]] 00:11:41.486 13:10:37 -- spdk/autotest.sh@233 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:11:41.486 13:10:37 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:41.486 13:10:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:41.486 13:10:37 -- common/autotest_common.sh@10 -- # set +x 00:11:41.486 ************************************ 00:11:41.486 START TEST nvme_fdp 00:11:41.486 ************************************ 00:11:41.486 13:10:37 nvme_fdp -- common/autotest_common.sh@1121 -- # test/nvme/nvme_fdp.sh 00:11:41.486 * Looking for test storage... 
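The controller used for the SCC suite above (nvme1) was selected by the ctrl_has_scc test traced earlier: it reads ONCS (Optional NVM Command Support, reported as 0x15d for every controller in this run) and masks bit 8, which advertises the Simple Copy command. A standalone sketch of that check, with the value hard-coded from the trace:

    # 0x15d = 0b1_0101_1101 -> bit 8 (0x100) is set, so Simple Copy is supported.
    oncs=0x15d
    if (( oncs & (1 << 8) )); then
        echo "Simple Copy (SCC) supported"
    else
        echo "Simple Copy (SCC) not supported"
    fi

The simple_copy output above ("Controller supports SCC.", "LBAs matching Written Data: 64") is the runtime confirmation of the same capability bit.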
00:11:41.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:41.486 13:10:38 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:41.486 13:10:38 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:41.486 13:10:38 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:41.486 13:10:38 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:41.486 13:10:38 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:41.486 13:10:38 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.486 13:10:38 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.486 13:10:38 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.486 13:10:38 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.486 13:10:38 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.486 13:10:38 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.486 13:10:38 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:11:41.486 13:10:38 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.486 13:10:38 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:11:41.486 13:10:38 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:41.486 13:10:38 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:11:41.486 13:10:38 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:41.486 13:10:38 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:11:41.486 13:10:38 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:41.486 13:10:38 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:41.486 13:10:38 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:41.486 13:10:38 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:11:41.486 13:10:38 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:41.486 13:10:38 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:41.743 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:42.001 Waiting for block devices as requested 00:11:42.001 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:42.001 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:42.259 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:42.259 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:47.553 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:47.553 13:10:43 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:47.553 13:10:43 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:47.553 13:10:43 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:47.553 13:10:43 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:47.553 13:10:43 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.553 
13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:47.553 13:10:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.553 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:47.554 13:10:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.554 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:47.555 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:47.555 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.555 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:47.555 13:10:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:47.555 13:10:44 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:47.555 13:10:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:47.556 
13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:47.556 
13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.556 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:47.557 13:10:44 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:47.557 13:10:44 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:47.557 13:10:44 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:47.557 13:10:44 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:47.557 13:10:44 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:47.557 13:10:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:47.557 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 
13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:47.558 13:10:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.558 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:47.559 13:10:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:47.559 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:47.560 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.561 
13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:47.561 13:10:44 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:47.561 13:10:44 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:11:47.561 13:10:44 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:47.561 13:10:44 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.561 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:47.562 13:10:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.562 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:47.563 13:10:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.563 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:47.564 13:10:44 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 
13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nuse]="0x100000"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.564 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:47.565 13:10:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.565 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:47.566 13:10:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
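The repetitive trace here (first for nvme2n1, now repeating for nvme2n2 and then nvme2n3) is the nvme_get helper in nvme/functions.sh at work: it runs nvme-cli's id-ns for the device, splits each "field : value" output line on the first colon, and caches the pair in a global associative array named after the device, so later checks can read ${nvme2n1[nsze]} and friends without re-querying the drive. A minimal sketch of that pattern follows; the helper name and the exact whitespace trimming are illustrative assumptions, not the verbatim functions.sh code.

    #!/usr/bin/env bash
    # Sketch of the id-ns/id-ctrl parsing loop being traced above.
    # parse_id_output is a stand-in for functions.sh's nvme_get; the exact
    # trimming of padded field names and values is an assumption, not verbatim.
    parse_id_output() {
        local ref=$1; shift
        local reg val
        local -gA "$ref=()"                    # e.g. creates the global array nvme2n1
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}           # field names are space-padded
            val=${val#"${val%%[![:space:]]*}"} # drop leading spaces from the value
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\$val"         # e.g. nvme2n1[nsze]=0x100000
        done < <("$@")
    }

    # Usage on a host with nvme-cli and these QEMU NVMe devices present:
    #   parse_id_output nvme2n1 nvme id-ns /dev/nvme2n1
    #   echo "${nvme2n1[flbas]}"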
00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
00:11:47.566 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.567 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.568 13:10:44 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:47.568 13:10:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:47.568 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:47.569 13:10:44 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:47.569 13:10:44 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:47.569 13:10:44 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:47.569 13:10:44 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.569 13:10:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:47.829 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:47.829 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.829 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.829 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:47.830 13:10:44 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
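Just before the nvme3 dump above began, functions.sh lines 58-63 (visible in the trace) tied the finished controller together: nvme2's three namespaces were indexed into _ctrl_ns, and the controller was registered in the ctrls, nvmes, bdfs and ordered_ctrls maps with its PCI address 0000:00:12.0, after which the scan moved on to nvme3 at 0000:00:13.0 via pci_can_use from scripts/common.sh. A simplified sketch of that outer scan loop is below; it assumes nvme/functions.sh and scripts/common.sh are sourced so nvme_get and pci_can_use exist, and the BDF derivation via readlink is one plausible way to obtain what the trace prints as pci=..., not the verbatim implementation.

    # Simplified rendering of the controller scan traced here, not verbatim.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:12.0
        pci_can_use "$pci" || continue                   # honors PCI allow/block lists
        ctrl_dev=${ctrl##*/}                             # nvme2, nvme3, ...
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # fills nvme2[vid], nvme2[mdts], ...
        declare -gA "${ctrl_dev}_ns=()"
        declare -n _ctrl_ns="${ctrl_dev}_ns"
        for ns in "$ctrl/${ctrl##*/}n"*; do
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}                             # nvme2n1, nvme2n2, ...
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"      # fills nvme2n1[nsze], ...
            _ctrl_ns[${ns##*n}]=$ns_dev                  # index by namespace number
        done
        unset -n _ctrl_ns
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done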
00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
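The xtrace above and below is nvme/functions.sh walking the id-ctrl output for this controller and caching every field in a Bash associative array keyed by register name: IFS=: splits each "name : value" line, and eval stores it as nvme3[name]=value. Condensed, and with an illustrative array name and device path standing in for the script's namerefs, the loop being traced amounts to the sketch below; it assumes nvme-cli is installed and prints the usual "name : value" lines.

    # Minimal sketch of the id-ctrl caching loop seen in this trace.
    # "ctrl" and /dev/nvme3 are illustrative; the real script evals into a
    # controller-specific array (nvme3) through a nameref.
    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                  # register name, padding stripped
        val="${val#"${val%%[![:space:]]*}"}"      # value, leading blanks trimmed
        [[ -n $reg ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme3)
    echo "ctratt=${ctrl[ctratt]:-unset}"          # 0x88010 for the FDP-capable controller in this run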
00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:47.831 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 
13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:47.832 13:10:44 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:11:47.832 13:10:44 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:11:47.833 13:10:44 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:11:47.833 13:10:44 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:11:47.833 13:10:44 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:11:47.833 13:10:44 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:11:47.833 13:10:44 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:48.399 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:48.657 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:48.657 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:48.657 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:48.915 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:48.915 13:10:45 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:48.915 13:10:45 nvme_fdp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:11:48.915 13:10:45 nvme_fdp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:48.915 13:10:45 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:48.915 ************************************ 00:11:48.915 START TEST nvme_flexible_data_placement 00:11:48.915 ************************************ 00:11:48.915 13:10:45 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:49.173 Initializing NVMe Controllers 00:11:49.173 Attaching to 0000:00:13.0 00:11:49.173 Controller supports FDP Attached to 0000:00:13.0 00:11:49.173 Namespace ID: 1 Endurance Group ID: 1 
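Everything the controller-selection trace above does reduces to one test per controller: read the cached ctratt value and check bit 19, the CTRATT flag NVMe uses to advertise Flexible Data Placement. Only nvme3 reports 0x88010 (bit 19 set); the other three report 0x8000, which is why nvme3 at 0000:00:13.0 is the controller handed to the fdp example. A self-contained sketch of that check, seeded with the ctratt values reported in this run:

    # Mirror of ctrl_has_fdp/get_ctrls_with_feature: keep controllers whose
    # CTRATT has bit 19 (FDP) set. The ctratt values are the ones this run
    # cached from id-ctrl; nothing else here is taken from the script.
    has_fdp() { (( $1 & 1 << 19 )); }

    declare -A ctratt=( [nvme0]=0x8000 [nvme1]=0x8000 [nvme2]=0x8000 [nvme3]=0x88010 )
    for ctrl in "${!ctratt[@]}"; do
        has_fdp "${ctratt[$ctrl]}" && echo "$ctrl supports FDP"   # prints: nvme3 supports FDP
    done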
00:11:49.173 Initialization complete. 00:11:49.173 00:11:49.173 ================================== 00:11:49.173 == FDP tests for Namespace: #01 == 00:11:49.173 ================================== 00:11:49.173 00:11:49.173 Get Feature: FDP: 00:11:49.173 ================= 00:11:49.173 Enabled: Yes 00:11:49.173 FDP configuration Index: 0 00:11:49.173 00:11:49.173 FDP configurations log page 00:11:49.173 =========================== 00:11:49.173 Number of FDP configurations: 1 00:11:49.173 Version: 0 00:11:49.173 Size: 112 00:11:49.173 FDP Configuration Descriptor: 0 00:11:49.173 Descriptor Size: 96 00:11:49.173 Reclaim Group Identifier format: 2 00:11:49.173 FDP Volatile Write Cache: Not Present 00:11:49.173 FDP Configuration: Valid 00:11:49.173 Vendor Specific Size: 0 00:11:49.173 Number of Reclaim Groups: 2 00:11:49.173 Number of Recalim Unit Handles: 8 00:11:49.173 Max Placement Identifiers: 128 00:11:49.173 Number of Namespaces Suppprted: 256 00:11:49.173 Reclaim unit Nominal Size: 6000000 bytes 00:11:49.173 Estimated Reclaim Unit Time Limit: Not Reported 00:11:49.173 RUH Desc #000: RUH Type: Initially Isolated 00:11:49.173 RUH Desc #001: RUH Type: Initially Isolated 00:11:49.173 RUH Desc #002: RUH Type: Initially Isolated 00:11:49.173 RUH Desc #003: RUH Type: Initially Isolated 00:11:49.173 RUH Desc #004: RUH Type: Initially Isolated 00:11:49.173 RUH Desc #005: RUH Type: Initially Isolated 00:11:49.173 RUH Desc #006: RUH Type: Initially Isolated 00:11:49.173 RUH Desc #007: RUH Type: Initially Isolated 00:11:49.173 00:11:49.173 FDP reclaim unit handle usage log page 00:11:49.173 ====================================== 00:11:49.173 Number of Reclaim Unit Handles: 8 00:11:49.173 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:49.173 RUH Usage Desc #001: RUH Attributes: Unused 00:11:49.173 RUH Usage Desc #002: RUH Attributes: Unused 00:11:49.173 RUH Usage Desc #003: RUH Attributes: Unused 00:11:49.173 RUH Usage Desc #004: RUH Attributes: Unused 00:11:49.173 RUH Usage Desc #005: RUH Attributes: Unused 00:11:49.173 RUH Usage Desc #006: RUH Attributes: Unused 00:11:49.173 RUH Usage Desc #007: RUH Attributes: Unused 00:11:49.173 00:11:49.173 FDP statistics log page 00:11:49.173 ======================= 00:11:49.173 Host bytes with metadata written: 1471655936 00:11:49.173 Media bytes with metadata written: 1471881216 00:11:49.173 Media bytes erased: 0 00:11:49.173 00:11:49.173 FDP Reclaim unit handle status 00:11:49.173 ============================== 00:11:49.173 Number of RUHS descriptors: 2 00:11:49.173 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000002485 00:11:49.174 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:49.174 00:11:49.174 FDP write on placement id: 0 success 00:11:49.174 00:11:49.174 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:49.174 00:11:49.174 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:49.174 00:11:49.174 Get Feature: FDP Events for Placement handle: #0 00:11:49.174 ======================== 00:11:49.174 Number of FDP Events: 6 00:11:49.174 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:49.174 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:49.174 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:11:49.174 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:49.174 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:49.174 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 
00:11:49.174 00:11:49.174 FDP events log page 00:11:49.174 =================== 00:11:49.174 Number of FDP events: 1 00:11:49.174 FDP Event #0: 00:11:49.174 Event Type: RU Not Written to Capacity 00:11:49.174 Placement Identifier: Valid 00:11:49.174 NSID: Valid 00:11:49.174 Location: Valid 00:11:49.174 Placement Identifier: 0 00:11:49.174 Event Timestamp: 4 00:11:49.174 Namespace Identifier: 1 00:11:49.174 Reclaim Group Identifier: 0 00:11:49.174 Reclaim Unit Handle Identifier: 0 00:11:49.174 00:11:49.174 FDP test passed 00:11:49.174 00:11:49.174 real 0m0.246s 00:11:49.174 user 0m0.070s 00:11:49.174 sys 0m0.074s 00:11:49.174 13:10:45 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:49.174 13:10:45 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:11:49.174 ************************************ 00:11:49.174 END TEST nvme_flexible_data_placement 00:11:49.174 ************************************ 00:11:49.174 00:11:49.174 real 0m7.856s 00:11:49.174 user 0m1.227s 00:11:49.174 sys 0m1.626s 00:11:49.174 13:10:45 nvme_fdp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:49.174 ************************************ 00:11:49.174 END TEST nvme_fdp 00:11:49.174 13:10:45 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:49.174 ************************************ 00:11:49.174 13:10:45 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:11:49.174 13:10:45 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:49.174 13:10:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:49.174 13:10:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:49.174 13:10:45 -- common/autotest_common.sh@10 -- # set +x 00:11:49.174 ************************************ 00:11:49.174 START TEST nvme_rpc 00:11:49.174 ************************************ 00:11:49.174 13:10:45 nvme_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:49.433 * Looking for test storage... 
00:11:49.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:49.433 13:10:45 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:49.433 13:10:45 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:49.433 13:10:45 nvme_rpc -- common/autotest_common.sh@1520 -- # bdfs=() 00:11:49.433 13:10:45 nvme_rpc -- common/autotest_common.sh@1520 -- # local bdfs 00:11:49.433 13:10:45 nvme_rpc -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:11:49.433 13:10:45 nvme_rpc -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:11:49.433 13:10:45 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:49.433 13:10:45 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:11:49.433 13:10:45 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:49.433 13:10:45 nvme_rpc -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:49.433 13:10:45 nvme_rpc -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:11:49.433 13:10:46 nvme_rpc -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:11:49.433 13:10:46 nvme_rpc -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:49.433 13:10:46 nvme_rpc -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:11:49.433 13:10:46 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:11:49.433 13:10:46 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=83459 00:11:49.433 13:10:46 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:49.433 13:10:46 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:49.433 13:10:46 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 83459 00:11:49.433 13:10:46 nvme_rpc -- common/autotest_common.sh@827 -- # '[' -z 83459 ']' 00:11:49.433 13:10:46 nvme_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.433 13:10:46 nvme_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:49.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.433 13:10:46 nvme_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.433 13:10:46 nvme_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:49.433 13:10:46 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.433 [2024-07-15 13:10:46.155453] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
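The get_first_nvme_bdf call traced above builds the list of controller addresses from gen_nvme.sh, keeps the first one (0000:00:10.0 in this run), and hands it to the test as bdf. A condensed sketch of that helper; the script path and jq filter are the ones shown in the trace, while the function body itself is simplified:

    # Return the first NVMe PCI address known to the test configuration.
    # Requires jq and the repo layout used in this run.
    rootdir=/home/vagrant/spdk_repo/spdk
    get_first_nvme_bdf() {
        local bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} > 0 )) || return 1
        echo "${bdfs[0]}"
    }
    bdf=$(get_first_nvme_bdf)    # -> 0000:00:10.0 here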
00:11:49.433 [2024-07-15 13:10:46.155678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83459 ] 00:11:49.691 [2024-07-15 13:10:46.307455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:49.691 [2024-07-15 13:10:46.410981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.691 [2024-07-15 13:10:46.411003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.625 13:10:47 nvme_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:50.625 13:10:47 nvme_rpc -- common/autotest_common.sh@860 -- # return 0 00:11:50.625 13:10:47 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:11:50.884 Nvme0n1 00:11:50.884 13:10:47 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:50.884 13:10:47 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:50.884 request: 00:11:50.884 { 00:11:50.884 "filename": "non_existing_file", 00:11:50.884 "bdev_name": "Nvme0n1", 00:11:50.884 "method": "bdev_nvme_apply_firmware", 00:11:50.884 "req_id": 1 00:11:50.884 } 00:11:50.884 Got JSON-RPC error response 00:11:50.884 response: 00:11:50.884 { 00:11:50.884 "code": -32603, 00:11:50.884 "message": "open file failed." 00:11:50.884 } 00:11:50.884 13:10:47 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:50.884 13:10:47 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:50.884 13:10:47 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:51.143 13:10:47 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:51.143 13:10:47 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 83459 00:11:51.143 13:10:47 nvme_rpc -- common/autotest_common.sh@946 -- # '[' -z 83459 ']' 00:11:51.143 13:10:47 nvme_rpc -- common/autotest_common.sh@950 -- # kill -0 83459 00:11:51.143 13:10:47 nvme_rpc -- common/autotest_common.sh@951 -- # uname 00:11:51.143 13:10:47 nvme_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:51.143 13:10:47 nvme_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83459 00:11:51.143 13:10:47 nvme_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:51.143 13:10:47 nvme_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:51.143 13:10:47 nvme_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83459' 00:11:51.143 killing process with pid 83459 00:11:51.143 13:10:47 nvme_rpc -- common/autotest_common.sh@965 -- # kill 83459 00:11:51.143 13:10:47 nvme_rpc -- common/autotest_common.sh@970 -- # wait 83459 00:11:51.711 00:11:51.711 real 0m2.442s 00:11:51.711 user 0m4.735s 00:11:51.711 sys 0m0.664s 00:11:51.711 13:10:48 nvme_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:51.711 13:10:48 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.711 ************************************ 00:11:51.711 END TEST nvme_rpc 00:11:51.711 ************************************ 00:11:51.711 13:10:48 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:51.711 13:10:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:51.711 
13:10:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:51.711 13:10:48 -- common/autotest_common.sh@10 -- # set +x 00:11:51.711 ************************************ 00:11:51.711 START TEST nvme_rpc_timeouts 00:11:51.711 ************************************ 00:11:51.711 13:10:48 nvme_rpc_timeouts -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:51.970 * Looking for test storage... 00:11:51.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:51.970 13:10:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:51.970 13:10:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_83522 00:11:51.970 13:10:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_83522 00:11:51.970 13:10:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=83547 00:11:51.970 13:10:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:51.970 13:10:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:51.970 13:10:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 83547 00:11:51.970 13:10:48 nvme_rpc_timeouts -- common/autotest_common.sh@827 -- # '[' -z 83547 ']' 00:11:51.970 13:10:48 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.970 13:10:48 nvme_rpc_timeouts -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:51.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.970 13:10:48 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.970 13:10:48 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:51.970 13:10:48 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:51.970 [2024-07-15 13:10:48.573409] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
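Both RPC tests use the launch pattern traced above: start spdk_tgt on a two-core mask, arrange for it to be killed if the test aborts, and block until its RPC socket answers before issuing any commands. Stripped of the xtrace noise, the pattern is roughly the following; waitforlisten is the polling helper from autotest_common.sh, and the binary path and core mask are the ones used in this run:

    # Launch the SPDK target and wait for /var/tmp/spdk.sock to come up.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 &
    spdk_tgt_pid=$!
    trap 'kill -9 $spdk_tgt_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$spdk_tgt_pid"    # polls the RPC socket until the target is ready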
00:11:51.970 [2024-07-15 13:10:48.573628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83547 ] 00:11:52.234 [2024-07-15 13:10:48.727159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:52.234 [2024-07-15 13:10:48.841747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.234 [2024-07-15 13:10:48.841824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.799 13:10:49 nvme_rpc_timeouts -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:52.799 13:10:49 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # return 0 00:11:52.799 Checking default timeout settings: 00:11:52.799 13:10:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:52.799 13:10:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:53.363 Making settings changes with rpc: 00:11:53.363 13:10:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:53.363 13:10:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:53.363 Check default vs. modified settings: 00:11:53.364 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:53.364 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_83522 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_83522 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:53.928 Setting action_on_timeout is changed as expected. 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_83522 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_83522 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:53.928 Setting timeout_us is changed as expected. 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_83522 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_83522 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:53.928 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:53.929 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:53.929 Setting timeout_admin_us is changed as expected. 00:11:53.929 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:53.929 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
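The three comparisons above all follow one recipe: dump the JSON config before and after bdev_nvme_set_options, pull the setting of interest out of each dump with the same grep/awk/sed pipeline, and require that only the modified copy carries the new value. A condensed sketch using this run's temp-file names and one of its settings; check_setting is an illustrative helper, not a function from the script:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default_83522
    $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified_83522

    check_setting() {
        local setting=$1 expected=$2 before after
        before=$(grep "$setting" /tmp/settings_default_83522 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_83522 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before != "$after" && $after == "$expected" ]] && echo "Setting $setting is changed as expected."
    }
    check_setting timeout_us 12000000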
00:11:53.929 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:53.929 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_83522 /tmp/settings_modified_83522 00:11:53.929 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 83547 00:11:53.929 13:10:50 nvme_rpc_timeouts -- common/autotest_common.sh@946 -- # '[' -z 83547 ']' 00:11:53.929 13:10:50 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # kill -0 83547 00:11:53.929 13:10:50 nvme_rpc_timeouts -- common/autotest_common.sh@951 -- # uname 00:11:53.929 13:10:50 nvme_rpc_timeouts -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:53.929 13:10:50 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83547 00:11:53.929 13:10:50 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:53.929 killing process with pid 83547 00:11:53.929 13:10:50 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:53.929 13:10:50 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83547' 00:11:53.929 13:10:50 nvme_rpc_timeouts -- common/autotest_common.sh@965 -- # kill 83547 00:11:53.929 13:10:50 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # wait 83547 00:11:54.492 RPC TIMEOUT SETTING TEST PASSED. 00:11:54.492 13:10:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:11:54.492 00:11:54.492 real 0m2.579s 00:11:54.492 user 0m5.110s 00:11:54.492 sys 0m0.640s 00:11:54.492 13:10:50 nvme_rpc_timeouts -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:54.492 13:10:50 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:54.492 ************************************ 00:11:54.492 END TEST nvme_rpc_timeouts 00:11:54.492 ************************************ 00:11:54.492 13:10:50 -- spdk/autotest.sh@243 -- # uname -s 00:11:54.492 13:10:51 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:11:54.492 13:10:51 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:54.492 13:10:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:54.492 13:10:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:54.492 13:10:51 -- common/autotest_common.sh@10 -- # set +x 00:11:54.492 ************************************ 00:11:54.492 START TEST sw_hotplug 00:11:54.492 ************************************ 00:11:54.492 13:10:51 sw_hotplug -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:54.492 * Looking for test storage... 
00:11:54.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:54.492 13:10:51 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:54.749 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:55.007 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:55.007 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:55.007 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:55.007 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:55.007 13:10:51 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # hotplug_wait=6 00:11:55.007 13:10:51 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # hotplug_events=3 00:11:55.007 13:10:51 sw_hotplug -- nvme/sw_hotplug.sh@126 -- # nvmes=($(nvme_in_userspace)) 00:11:55.007 13:10:51 sw_hotplug -- nvme/sw_hotplug.sh@126 -- # nvme_in_userspace 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@230 -- # local class 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:11:55.007 13:10:51 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:11:55.007 13:10:51 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:55.007 13:10:51 sw_hotplug -- nvme/sw_hotplug.sh@127 -- # nvme_count=2 00:11:55.007 13:10:51 sw_hotplug -- 
nvme/sw_hotplug.sh@128 -- # nvmes=("${nvmes[@]::nvme_count}") 00:11:55.007 13:10:51 sw_hotplug -- nvme/sw_hotplug.sh@130 -- # xtrace_disable 00:11:55.007 13:10:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.265 13:10:51 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # run_hotplug 00:11:55.265 13:10:51 sw_hotplug -- nvme/sw_hotplug.sh@65 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:11:55.265 13:10:51 sw_hotplug -- nvme/sw_hotplug.sh@73 -- # hotplug_pid=83885 00:11:55.265 13:10:51 sw_hotplug -- nvme/sw_hotplug.sh@75 -- # debug_remove_attach_helper 3 6 false 00:11:55.265 13:10:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:11:55.265 13:10:51 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:11:55.265 13:10:51 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 false 00:11:55.265 13:10:51 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:11:55.265 13:10:51 sw_hotplug -- common/autotest_common.sh@706 -- # exec 00:11:55.265 13:10:51 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:11:55.265 13:10:51 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 false 00:11:55.265 13:10:51 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:11:55.265 13:10:51 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:11:55.265 13:10:51 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=false 00:11:55.265 13:10:51 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:11:55.265 13:10:51 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:11:55.265 Initializing NVMe Controllers 00:11:55.265 Attaching to 0000:00:10.0 00:11:55.265 Attaching to 0000:00:11.0 00:11:55.265 Attaching to 0000:00:12.0 00:11:55.265 Attaching to 0000:00:13.0 00:11:55.265 Attached to 0000:00:10.0 00:11:55.265 Attached to 0000:00:11.0 00:11:55.265 Attached to 0000:00:13.0 00:11:55.265 Attached to 0000:00:12.0 00:11:55.265 Initialization complete. Starting I/O... 
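Note: the nvme_in_userspace trace above reduces to filtering PCI functions whose class/subclass/progif is 01/08/02 (NVMe) with lspci, then keeping only the first nvme_count entries. A minimal sketch of that logic, assembled from the commands shown in the trace (not the exact scripts/common.sh source):

    # Enumerate NVMe controllers as BDF addresses (class 01, subclass 08, progif 02).
    nvmes=($(lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'))
    # sw_hotplug.sh@127/@128: exercise only the first two controllers in this run.
    nvme_count=2
    nvmes=("${nvmes[@]::nvme_count}")
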
00:11:55.523 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:11:55.523 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:11:55.523 QEMU NVMe Ctrl (12343 ): 0 I/Os completed (+0) 00:11:55.523 QEMU NVMe Ctrl (12342 ): 4 I/Os completed (+4) 00:11:55.523 00:11:56.456 QEMU NVMe Ctrl (12340 ): 1047 I/Os completed (+1047) 00:11:56.456 QEMU NVMe Ctrl (12341 ): 1176 I/Os completed (+1176) 00:11:56.456 QEMU NVMe Ctrl (12343 ): 1157 I/Os completed (+1157) 00:11:56.456 QEMU NVMe Ctrl (12342 ): 1124 I/Os completed (+1120) 00:11:56.456 00:11:57.406 QEMU NVMe Ctrl (12340 ): 2231 I/Os completed (+1184) 00:11:57.406 QEMU NVMe Ctrl (12341 ): 2384 I/Os completed (+1208) 00:11:57.406 QEMU NVMe Ctrl (12343 ): 2351 I/Os completed (+1194) 00:11:57.406 QEMU NVMe Ctrl (12342 ): 2376 I/Os completed (+1252) 00:11:57.406 00:11:58.340 QEMU NVMe Ctrl (12340 ): 4017 I/Os completed (+1786) 00:11:58.340 QEMU NVMe Ctrl (12341 ): 4327 I/Os completed (+1943) 00:11:58.340 QEMU NVMe Ctrl (12343 ): 4211 I/Os completed (+1860) 00:11:58.340 QEMU NVMe Ctrl (12342 ): 4275 I/Os completed (+1899) 00:11:58.340 00:11:59.275 QEMU NVMe Ctrl (12340 ): 5735 I/Os completed (+1718) 00:11:59.275 QEMU NVMe Ctrl (12341 ): 6165 I/Os completed (+1838) 00:11:59.275 QEMU NVMe Ctrl (12343 ): 5986 I/Os completed (+1775) 00:11:59.275 QEMU NVMe Ctrl (12342 ): 6079 I/Os completed (+1804) 00:11:59.275 00:12:00.649 QEMU NVMe Ctrl (12340 ): 7607 I/Os completed (+1872) 00:12:00.649 QEMU NVMe Ctrl (12341 ): 8045 I/Os completed (+1880) 00:12:00.649 QEMU NVMe Ctrl (12343 ): 7892 I/Os completed (+1906) 00:12:00.649 QEMU NVMe Ctrl (12342 ): 7979 I/Os completed (+1900) 00:12:00.649 00:12:01.215 13:10:57 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:12:01.215 13:10:57 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:01.215 13:10:57 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:01.215 [2024-07-15 13:10:57.791552] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:01.215 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:01.215 [2024-07-15 13:10:57.793561] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.215 [2024-07-15 13:10:57.793711] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.215 [2024-07-15 13:10:57.793834] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.215 [2024-07-15 13:10:57.793943] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.215 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:01.215 [2024-07-15 13:10:57.796643] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.215 [2024-07-15 13:10:57.796716] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.215 [2024-07-15 13:10:57.796742] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.215 [2024-07-15 13:10:57.796765] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.215 13:10:57 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:01.215 13:10:57 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:01.215 [2024-07-15 13:10:57.829136] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
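The "echo 1" steps at sw_hotplug.sh@35 are software-triggered surprise removals of the controllers under test. A hedged sketch of what they amount to — the sysfs path is an assumption, since the trace only shows the echo itself:

    # Detach each controller from the PCI bus while the hotplug app still has I/O in flight.
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"   # assumed destination of the traced 'echo 1'
    done

The "aborting outstanding command" and "in failed state" messages that follow are the driver reacting to 0000:00:10.0 and 0000:00:11.0 disappearing mid-I/O, which is exactly the condition this test provokes.
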
00:12:01.215 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:01.215 [2024-07-15 13:10:57.831381] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.215 [2024-07-15 13:10:57.831454] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.215 [2024-07-15 13:10:57.831484] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.215 [2024-07-15 13:10:57.831505] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.215 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:01.215 [2024-07-15 13:10:57.833511] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.215 [2024-07-15 13:10:57.833560] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.215 [2024-07-15 13:10:57.833590] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.215 [2024-07-15 13:10:57.833610] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.215 13:10:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:12:01.215 13:10:57 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:12:01.215 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:01.215 EAL: Scan for (pci) bus failed. 00:12:01.473 13:10:57 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:12:01.473 13:10:57 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:12:01.473 13:10:57 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:12:01.473 QEMU NVMe Ctrl (12343 ): 9831 I/Os completed (+1939) 00:12:01.473 QEMU NVMe Ctrl (12342 ): 9927 I/Os completed (+1948) 00:12:01.473 00:12:01.473 13:10:58 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:12:01.473 13:10:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:12:01.473 13:10:58 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:12:01.473 13:10:58 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:12:01.473 13:10:58 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:11.0 00:12:01.473 Attaching to 0000:00:10.0 00:12:01.473 Attached to 0000:00:10.0 00:12:01.473 13:10:58 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:11.0 00:12:01.473 13:10:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:12:01.473 13:10:58 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 12 00:12:01.473 Attaching to 0000:00:11.0 00:12:01.473 Attached to 0000:00:11.0 00:12:02.406 QEMU NVMe Ctrl (12343 ): 11785 I/Os completed (+1954) 00:12:02.406 QEMU NVMe Ctrl (12342 ): 11917 I/Os completed (+1990) 00:12:02.406 QEMU NVMe Ctrl (12340 ): 1838 I/Os completed (+1838) 00:12:02.407 QEMU NVMe Ctrl (12341 ): 1673 I/Os completed (+1673) 00:12:02.407 00:12:03.339 QEMU NVMe Ctrl (12343 ): 13548 I/Os completed (+1763) 00:12:03.339 QEMU NVMe Ctrl (12342 ): 13770 I/Os completed (+1853) 00:12:03.339 QEMU NVMe Ctrl (12340 ): 3669 I/Os completed (+1831) 00:12:03.339 QEMU NVMe Ctrl (12341 ): 3485 I/Os completed (+1812) 00:12:03.339 00:12:04.273 QEMU NVMe Ctrl (12343 ): 15256 I/Os completed (+1708) 00:12:04.273 QEMU NVMe Ctrl (12342 ): 15650 I/Os completed (+1880) 00:12:04.273 QEMU NVMe Ctrl (12340 ): 5433 I/Os completed (+1764) 00:12:04.273 QEMU NVMe Ctrl (12341 ): 5240 I/Os completed (+1755) 00:12:04.273 00:12:05.647 QEMU NVMe Ctrl (12343 ): 17104 
I/Os completed (+1848) 00:12:05.647 QEMU NVMe Ctrl (12342 ): 17538 I/Os completed (+1888) 00:12:05.647 QEMU NVMe Ctrl (12340 ): 7323 I/Os completed (+1890) 00:12:05.647 QEMU NVMe Ctrl (12341 ): 7117 I/Os completed (+1877) 00:12:05.647 00:12:06.580 QEMU NVMe Ctrl (12343 ): 19004 I/Os completed (+1900) 00:12:06.580 QEMU NVMe Ctrl (12342 ): 19520 I/Os completed (+1982) 00:12:06.580 QEMU NVMe Ctrl (12340 ): 9266 I/Os completed (+1943) 00:12:06.580 QEMU NVMe Ctrl (12341 ): 9061 I/Os completed (+1944) 00:12:06.580 00:12:07.543 QEMU NVMe Ctrl (12343 ): 20623 I/Os completed (+1619) 00:12:07.543 QEMU NVMe Ctrl (12342 ): 21207 I/Os completed (+1687) 00:12:07.543 QEMU NVMe Ctrl (12340 ): 10947 I/Os completed (+1681) 00:12:07.543 QEMU NVMe Ctrl (12341 ): 10750 I/Os completed (+1689) 00:12:07.543 00:12:08.476 QEMU NVMe Ctrl (12343 ): 22410 I/Os completed (+1787) 00:12:08.476 QEMU NVMe Ctrl (12342 ): 23038 I/Os completed (+1831) 00:12:08.476 QEMU NVMe Ctrl (12340 ): 12826 I/Os completed (+1879) 00:12:08.476 QEMU NVMe Ctrl (12341 ): 12612 I/Os completed (+1862) 00:12:08.476 00:12:09.456 QEMU NVMe Ctrl (12343 ): 24222 I/Os completed (+1812) 00:12:09.456 QEMU NVMe Ctrl (12342 ): 25009 I/Os completed (+1971) 00:12:09.456 QEMU NVMe Ctrl (12340 ): 14765 I/Os completed (+1939) 00:12:09.456 QEMU NVMe Ctrl (12341 ): 14536 I/Os completed (+1924) 00:12:09.456 00:12:10.388 QEMU NVMe Ctrl (12343 ): 25943 I/Os completed (+1721) 00:12:10.388 QEMU NVMe Ctrl (12342 ): 26877 I/Os completed (+1868) 00:12:10.388 QEMU NVMe Ctrl (12340 ): 16567 I/Os completed (+1802) 00:12:10.388 QEMU NVMe Ctrl (12341 ): 16357 I/Os completed (+1821) 00:12:10.388 00:12:11.318 QEMU NVMe Ctrl (12343 ): 27797 I/Os completed (+1854) 00:12:11.318 QEMU NVMe Ctrl (12342 ): 28791 I/Os completed (+1914) 00:12:11.319 QEMU NVMe Ctrl (12340 ): 18473 I/Os completed (+1906) 00:12:11.319 QEMU NVMe Ctrl (12341 ): 18254 I/Os completed (+1897) 00:12:11.319 00:12:12.694 QEMU NVMe Ctrl (12343 ): 29502 I/Os completed (+1705) 00:12:12.694 QEMU NVMe Ctrl (12342 ): 30629 I/Os completed (+1838) 00:12:12.694 QEMU NVMe Ctrl (12340 ): 20217 I/Os completed (+1744) 00:12:12.694 QEMU NVMe Ctrl (12341 ): 20009 I/Os completed (+1755) 00:12:12.694 00:12:13.272 QEMU NVMe Ctrl (12343 ): 31253 I/Os completed (+1751) 00:12:13.272 QEMU NVMe Ctrl (12342 ): 32465 I/Os completed (+1836) 00:12:13.272 QEMU NVMe Ctrl (12340 ): 22050 I/Os completed (+1833) 00:12:13.272 QEMU NVMe Ctrl (12341 ): 21831 I/Os completed (+1822) 00:12:13.272 00:12:13.540 13:11:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:12:13.540 13:11:10 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:12:13.540 13:11:10 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:13.540 13:11:10 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:13.540 [2024-07-15 13:11:10.149221] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
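The re-attach sequence traced earlier (sw_hotplug.sh@44-@50: echo 1, echo uio_pci_generic, echo <bdf>, echo '') restores the controllers between removal cycles. A hedged reconstruction — the echoed values come from the trace, while the sysfs destinations below are assumptions:

    echo 1 > /sys/bus/pci/rescan                                             # @44: rediscover removed functions
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"   # @47: pin the userspace driver
        echo "$dev" > /sys/bus/pci/drivers_probe                             # @48/@49: (re)bind the device
        echo ''    > "/sys/bus/pci/devices/$dev/driver_override"             # @50: clear the override
    done
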
00:12:13.540 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:13.540 [2024-07-15 13:11:10.151923] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.540 [2024-07-15 13:11:10.152235] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.540 [2024-07-15 13:11:10.152429] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.540 [2024-07-15 13:11:10.152482] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.540 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:13.541 [2024-07-15 13:11:10.154813] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.541 [2024-07-15 13:11:10.154875] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.541 [2024-07-15 13:11:10.154904] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.541 [2024-07-15 13:11:10.154938] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.541 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:12:13.541 EAL: Scan for (pci) bus failed. 00:12:13.541 13:11:10 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:13.541 13:11:10 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:13.541 [2024-07-15 13:11:10.171618] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:12:13.541 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:13.541 [2024-07-15 13:11:10.173928] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.541 [2024-07-15 13:11:10.174158] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.541 [2024-07-15 13:11:10.174360] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.541 [2024-07-15 13:11:10.174540] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.541 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:13.541 [2024-07-15 13:11:10.176870] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.541 [2024-07-15 13:11:10.177062] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.541 [2024-07-15 13:11:10.177108] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.541 [2024-07-15 13:11:10.177166] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.541 13:11:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:12:13.541 13:11:10 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:12:13.541 13:11:10 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:12:13.541 13:11:10 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:12:13.541 13:11:10 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:12:13.799 13:11:10 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:12:13.799 13:11:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:12:13.799 13:11:10 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:12:13.799 13:11:10 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:12:13.799 
13:11:10 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:11.0 00:12:13.799 Attaching to 0000:00:10.0 00:12:13.799 Attached to 0000:00:10.0 00:12:13.799 13:11:10 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:11.0 00:12:13.799 13:11:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:12:13.799 13:11:10 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 12 00:12:13.799 Attaching to 0000:00:11.0 00:12:13.799 Attached to 0000:00:11.0 00:12:14.365 QEMU NVMe Ctrl (12343 ): 33047 I/Os completed (+1794) 00:12:14.365 QEMU NVMe Ctrl (12342 ): 34353 I/Os completed (+1888) 00:12:14.365 QEMU NVMe Ctrl (12340 ): 1180 I/Os completed (+1180) 00:12:14.365 QEMU NVMe Ctrl (12341 ): 1030 I/Os completed (+1030) 00:12:14.365 00:12:15.300 QEMU NVMe Ctrl (12343 ): 34807 I/Os completed (+1760) 00:12:15.300 QEMU NVMe Ctrl (12342 ): 36233 I/Os completed (+1880) 00:12:15.300 QEMU NVMe Ctrl (12340 ): 3019 I/Os completed (+1839) 00:12:15.300 QEMU NVMe Ctrl (12341 ): 2859 I/Os completed (+1829) 00:12:15.300 00:12:16.674 QEMU NVMe Ctrl (12343 ): 36479 I/Os completed (+1672) 00:12:16.674 QEMU NVMe Ctrl (12342 ): 38008 I/Os completed (+1775) 00:12:16.674 QEMU NVMe Ctrl (12340 ): 4732 I/Os completed (+1713) 00:12:16.674 QEMU NVMe Ctrl (12341 ): 4610 I/Os completed (+1751) 00:12:16.674 00:12:17.604 QEMU NVMe Ctrl (12343 ): 38035 I/Os completed (+1556) 00:12:17.605 QEMU NVMe Ctrl (12342 ): 39738 I/Os completed (+1730) 00:12:17.605 QEMU NVMe Ctrl (12340 ): 6449 I/Os completed (+1717) 00:12:17.605 QEMU NVMe Ctrl (12341 ): 6277 I/Os completed (+1667) 00:12:17.605 00:12:18.537 QEMU NVMe Ctrl (12343 ): 39665 I/Os completed (+1630) 00:12:18.537 QEMU NVMe Ctrl (12342 ): 41547 I/Os completed (+1809) 00:12:18.537 QEMU NVMe Ctrl (12340 ): 8180 I/Os completed (+1731) 00:12:18.537 QEMU NVMe Ctrl (12341 ): 7977 I/Os completed (+1700) 00:12:18.537 00:12:19.470 QEMU NVMe Ctrl (12343 ): 41244 I/Os completed (+1579) 00:12:19.470 QEMU NVMe Ctrl (12342 ): 43252 I/Os completed (+1705) 00:12:19.470 QEMU NVMe Ctrl (12340 ): 9818 I/Os completed (+1638) 00:12:19.470 QEMU NVMe Ctrl (12341 ): 9628 I/Os completed (+1651) 00:12:19.470 00:12:20.403 QEMU NVMe Ctrl (12343 ): 43008 I/Os completed (+1764) 00:12:20.403 QEMU NVMe Ctrl (12342 ): 45113 I/Os completed (+1861) 00:12:20.403 QEMU NVMe Ctrl (12340 ): 11688 I/Os completed (+1870) 00:12:20.403 QEMU NVMe Ctrl (12341 ): 11403 I/Os completed (+1775) 00:12:20.403 00:12:21.335 QEMU NVMe Ctrl (12343 ): 44712 I/Os completed (+1704) 00:12:21.335 QEMU NVMe Ctrl (12342 ): 46927 I/Os completed (+1814) 00:12:21.335 QEMU NVMe Ctrl (12340 ): 13485 I/Os completed (+1797) 00:12:21.335 QEMU NVMe Ctrl (12341 ): 13188 I/Os completed (+1785) 00:12:21.335 00:12:22.268 QEMU NVMe Ctrl (12343 ): 46426 I/Os completed (+1714) 00:12:22.268 QEMU NVMe Ctrl (12342 ): 48725 I/Os completed (+1798) 00:12:22.268 QEMU NVMe Ctrl (12340 ): 15252 I/Os completed (+1767) 00:12:22.268 QEMU NVMe Ctrl (12341 ): 14948 I/Os completed (+1760) 00:12:22.268 00:12:23.639 QEMU NVMe Ctrl (12343 ): 48112 I/Os completed (+1686) 00:12:23.639 QEMU NVMe Ctrl (12342 ): 50497 I/Os completed (+1772) 00:12:23.639 QEMU NVMe Ctrl (12340 ): 17007 I/Os completed (+1755) 00:12:23.639 QEMU NVMe Ctrl (12341 ): 16734 I/Os completed (+1786) 00:12:23.639 00:12:24.573 QEMU NVMe Ctrl (12343 ): 49685 I/Os completed (+1573) 00:12:24.573 QEMU NVMe Ctrl (12342 ): 52144 I/Os completed (+1647) 00:12:24.573 QEMU NVMe Ctrl (12340 ): 18644 I/Os completed (+1637) 00:12:24.573 QEMU NVMe Ctrl (12341 ): 18373 I/Os completed (+1639) 00:12:24.573 00:12:25.509 QEMU NVMe Ctrl 
(12343 ): 51524 I/Os completed (+1839) 00:12:25.509 QEMU NVMe Ctrl (12342 ): 54023 I/Os completed (+1879) 00:12:25.509 QEMU NVMe Ctrl (12340 ): 20540 I/Os completed (+1896) 00:12:25.509 QEMU NVMe Ctrl (12341 ): 20245 I/Os completed (+1872) 00:12:25.509 00:12:25.767 13:11:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:12:25.767 13:11:22 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:12:25.767 13:11:22 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:25.767 13:11:22 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:25.767 [2024-07-15 13:11:22.455005] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:25.767 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:25.767 [2024-07-15 13:11:22.456988] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.767 [2024-07-15 13:11:22.457059] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.767 [2024-07-15 13:11:22.457086] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.767 [2024-07-15 13:11:22.457119] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.767 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:25.767 [2024-07-15 13:11:22.459074] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.767 [2024-07-15 13:11:22.459128] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.767 [2024-07-15 13:11:22.459169] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.767 [2024-07-15 13:11:22.459195] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.767 13:11:22 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:25.767 13:11:22 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:25.767 [2024-07-15 13:11:22.488866] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:25.767 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:25.767 [2024-07-15 13:11:22.490894] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.767 [2024-07-15 13:11:22.491087] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.767 [2024-07-15 13:11:22.491179] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.767 [2024-07-15 13:11:22.491215] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.767 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:25.767 [2024-07-15 13:11:22.493617] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.767 [2024-07-15 13:11:22.493803] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.768 [2024-07-15 13:11:22.493879] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.768 [2024-07-15 13:11:22.494056] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.768 13:11:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:12:25.768 13:11:22 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:12:26.026 13:11:22 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:12:26.026 13:11:22 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:12:26.026 13:11:22 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:12:26.026 13:11:22 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:12:26.026 13:11:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:12:26.026 13:11:22 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:12:26.026 13:11:22 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:12:26.026 13:11:22 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:11.0 00:12:26.026 Attaching to 0000:00:10.0 00:12:26.026 Attached to 0000:00:10.0 00:12:26.285 13:11:22 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:11.0 00:12:26.285 13:11:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:12:26.285 13:11:22 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 12 00:12:26.285 Attaching to 0000:00:11.0 00:12:26.285 Attached to 0000:00:11.0 00:12:26.285 unregister_dev: QEMU NVMe Ctrl (12343 ) 00:12:26.285 unregister_dev: QEMU NVMe Ctrl (12342 ) 00:12:26.285 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:26.285 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:26.285 [2024-07-15 13:11:22.798519] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:12:38.482 13:11:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:12:38.482 13:11:34 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:12:38.482 13:11:34 sw_hotplug -- common/autotest_common.sh@714 -- # time=43.00 00:12:38.482 13:11:34 sw_hotplug -- common/autotest_common.sh@716 -- # echo 43.00 00:12:38.482 13:11:34 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=43.00 00:12:38.482 13:11:34 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.00 2 00:12:38.482 remove_attach_helper took 43.00s to complete (handling 2 nvme drive(s)) 13:11:34 sw_hotplug -- nvme/sw_hotplug.sh@79 -- # sleep 6 00:12:45.059 13:11:40 sw_hotplug -- nvme/sw_hotplug.sh@81 -- # kill -0 83885 00:12:45.059 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 81: kill: (83885) - No such process 00:12:45.059 13:11:40 sw_hotplug -- nvme/sw_hotplug.sh@83 -- # wait 83885 00:12:45.059 13:11:40 sw_hotplug -- nvme/sw_hotplug.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:45.059 13:11:40 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # tgt_run_hotplug 00:12:45.059 13:11:40 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # local dev 00:12:45.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.059 13:11:40 sw_hotplug -- nvme/sw_hotplug.sh@98 -- # spdk_tgt_pid=84427 00:12:45.059 13:11:40 sw_hotplug -- nvme/sw_hotplug.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:45.059 13:11:40 sw_hotplug -- nvme/sw_hotplug.sh@100 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:12:45.059 13:11:40 sw_hotplug -- nvme/sw_hotplug.sh@101 -- # waitforlisten 84427 00:12:45.059 13:11:40 sw_hotplug -- common/autotest_common.sh@827 -- # '[' -z 84427 ']' 00:12:45.059 13:11:40 sw_hotplug -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.059 13:11:40 sw_hotplug -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:45.059 13:11:40 sw_hotplug -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.059 13:11:40 sw_hotplug -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:45.059 13:11:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:45.059 [2024-07-15 13:11:40.912788] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:12:45.059 [2024-07-15 13:11:40.912979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84427 ] 00:12:45.059 [2024-07-15 13:11:41.062445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.060 [2024-07-15 13:11:41.165606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.318 13:11:41 sw_hotplug -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:45.318 13:11:41 sw_hotplug -- common/autotest_common.sh@860 -- # return 0 00:12:45.318 13:11:41 sw_hotplug -- nvme/sw_hotplug.sh@103 -- # for dev in "${!nvmes[@]}" 00:12:45.318 13:11:41 sw_hotplug -- nvme/sw_hotplug.sh@104 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme00 -t PCIe -a 0000:00:10.0 00:12:45.318 13:11:41 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.318 13:11:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:45.318 Nvme00n1 00:12:45.318 13:11:41 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.318 13:11:41 sw_hotplug -- nvme/sw_hotplug.sh@105 -- # waitforbdev Nvme00n1 6 00:12:45.318 13:11:41 sw_hotplug -- common/autotest_common.sh@895 -- # local bdev_name=Nvme00n1 00:12:45.318 13:11:41 sw_hotplug -- common/autotest_common.sh@896 -- # local bdev_timeout=6 00:12:45.318 13:11:41 sw_hotplug -- common/autotest_common.sh@897 -- # local i 00:12:45.318 13:11:41 sw_hotplug -- common/autotest_common.sh@898 -- # [[ -z 6 ]] 00:12:45.318 13:11:41 sw_hotplug -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:12:45.318 13:11:41 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.318 13:11:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:45.318 
13:11:41 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.318 13:11:41 sw_hotplug -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Nvme00n1 -t 6 00:12:45.318 13:11:41 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.318 13:11:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:45.318 [ 00:12:45.318 { 00:12:45.318 "name": "Nvme00n1", 00:12:45.318 "aliases": [ 00:12:45.318 "6706ff82-89c1-49f0-9abf-68297697014e" 00:12:45.318 ], 00:12:45.318 "product_name": "NVMe disk", 00:12:45.318 "block_size": 4096, 00:12:45.318 "num_blocks": 1548666, 00:12:45.318 "uuid": "6706ff82-89c1-49f0-9abf-68297697014e", 00:12:45.318 "md_size": 64, 00:12:45.318 "md_interleave": false, 00:12:45.318 "dif_type": 0, 00:12:45.318 "assigned_rate_limits": { 00:12:45.318 "rw_ios_per_sec": 0, 00:12:45.318 "rw_mbytes_per_sec": 0, 00:12:45.318 "r_mbytes_per_sec": 0, 00:12:45.318 "w_mbytes_per_sec": 0 00:12:45.318 }, 00:12:45.318 "claimed": false, 00:12:45.318 "zoned": false, 00:12:45.318 "supported_io_types": { 00:12:45.318 "read": true, 00:12:45.318 "write": true, 00:12:45.318 "unmap": true, 00:12:45.318 "write_zeroes": true, 00:12:45.318 "flush": true, 00:12:45.318 "reset": true, 00:12:45.318 "compare": true, 00:12:45.318 "compare_and_write": false, 00:12:45.318 "abort": true, 00:12:45.318 "nvme_admin": true, 00:12:45.318 "nvme_io": true 00:12:45.318 }, 00:12:45.318 "driver_specific": { 00:12:45.318 "nvme": [ 00:12:45.318 { 00:12:45.318 "pci_address": "0000:00:10.0", 00:12:45.318 "trid": { 00:12:45.318 "trtype": "PCIe", 00:12:45.318 "traddr": "0000:00:10.0" 00:12:45.318 }, 00:12:45.318 "ctrlr_data": { 00:12:45.318 "cntlid": 0, 00:12:45.318 "vendor_id": "0x1b36", 00:12:45.318 "model_number": "QEMU NVMe Ctrl", 00:12:45.318 "serial_number": "12340", 00:12:45.318 "firmware_revision": "8.0.0", 00:12:45.318 "subnqn": "nqn.2019-08.org.qemu:12340", 00:12:45.318 "oacs": { 00:12:45.318 "security": 0, 00:12:45.318 "format": 1, 00:12:45.318 "firmware": 0, 00:12:45.318 "ns_manage": 1 00:12:45.318 }, 00:12:45.318 "multi_ctrlr": false, 00:12:45.318 "ana_reporting": false 00:12:45.318 }, 00:12:45.318 "vs": { 00:12:45.318 "nvme_version": "1.4" 00:12:45.318 }, 00:12:45.318 "ns_data": { 00:12:45.318 "id": 1, 00:12:45.318 "can_share": false 00:12:45.318 } 00:12:45.318 } 00:12:45.318 ], 00:12:45.318 "mp_policy": "active_passive" 00:12:45.318 } 00:12:45.318 } 00:12:45.318 ] 00:12:45.318 13:11:41 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.318 13:11:41 sw_hotplug -- common/autotest_common.sh@903 -- # return 0 00:12:45.318 13:11:41 sw_hotplug -- nvme/sw_hotplug.sh@103 -- # for dev in "${!nvmes[@]}" 00:12:45.318 13:11:41 sw_hotplug -- nvme/sw_hotplug.sh@104 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme01 -t PCIe -a 0000:00:11.0 00:12:45.318 13:11:41 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.318 13:11:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:45.318 Nvme01n1 00:12:45.318 13:11:42 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.318 13:11:42 sw_hotplug -- nvme/sw_hotplug.sh@105 -- # waitforbdev Nvme01n1 6 00:12:45.318 13:11:42 sw_hotplug -- common/autotest_common.sh@895 -- # local bdev_name=Nvme01n1 00:12:45.318 13:11:42 sw_hotplug -- common/autotest_common.sh@896 -- # local bdev_timeout=6 00:12:45.318 13:11:42 sw_hotplug -- common/autotest_common.sh@897 -- # local i 00:12:45.318 13:11:42 sw_hotplug -- common/autotest_common.sh@898 -- # [[ -z 6 ]] 00:12:45.318 13:11:42 
sw_hotplug -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:12:45.318 13:11:42 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.318 13:11:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:45.318 13:11:42 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.318 13:11:42 sw_hotplug -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Nvme01n1 -t 6 00:12:45.318 13:11:42 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.318 13:11:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:45.318 [ 00:12:45.318 { 00:12:45.318 "name": "Nvme01n1", 00:12:45.318 "aliases": [ 00:12:45.318 "6db6148b-0611-4c21-a581-c2ad8d8bd064" 00:12:45.318 ], 00:12:45.318 "product_name": "NVMe disk", 00:12:45.318 "block_size": 4096, 00:12:45.318 "num_blocks": 1310720, 00:12:45.318 "uuid": "6db6148b-0611-4c21-a581-c2ad8d8bd064", 00:12:45.318 "assigned_rate_limits": { 00:12:45.318 "rw_ios_per_sec": 0, 00:12:45.318 "rw_mbytes_per_sec": 0, 00:12:45.318 "r_mbytes_per_sec": 0, 00:12:45.318 "w_mbytes_per_sec": 0 00:12:45.318 }, 00:12:45.318 "claimed": false, 00:12:45.318 "zoned": false, 00:12:45.318 "supported_io_types": { 00:12:45.318 "read": true, 00:12:45.318 "write": true, 00:12:45.318 "unmap": true, 00:12:45.318 "write_zeroes": true, 00:12:45.318 "flush": true, 00:12:45.318 "reset": true, 00:12:45.318 "compare": true, 00:12:45.318 "compare_and_write": false, 00:12:45.318 "abort": true, 00:12:45.318 "nvme_admin": true, 00:12:45.318 "nvme_io": true 00:12:45.318 }, 00:12:45.318 "driver_specific": { 00:12:45.318 "nvme": [ 00:12:45.318 { 00:12:45.318 "pci_address": "0000:00:11.0", 00:12:45.318 "trid": { 00:12:45.318 "trtype": "PCIe", 00:12:45.318 "traddr": "0000:00:11.0" 00:12:45.318 }, 00:12:45.318 "ctrlr_data": { 00:12:45.318 "cntlid": 0, 00:12:45.318 "vendor_id": "0x1b36", 00:12:45.318 "model_number": "QEMU NVMe Ctrl", 00:12:45.318 "serial_number": "12341", 00:12:45.318 "firmware_revision": "8.0.0", 00:12:45.318 "subnqn": "nqn.2019-08.org.qemu:12341", 00:12:45.318 "oacs": { 00:12:45.318 "security": 0, 00:12:45.318 "format": 1, 00:12:45.318 "firmware": 0, 00:12:45.318 "ns_manage": 1 00:12:45.318 }, 00:12:45.318 "multi_ctrlr": false, 00:12:45.318 "ana_reporting": false 00:12:45.318 }, 00:12:45.318 "vs": { 00:12:45.318 "nvme_version": "1.4" 00:12:45.319 }, 00:12:45.319 "ns_data": { 00:12:45.319 "id": 1, 00:12:45.319 "can_share": false 00:12:45.319 } 00:12:45.319 } 00:12:45.319 ], 00:12:45.319 "mp_policy": "active_passive" 00:12:45.319 } 00:12:45.319 } 00:12:45.319 ] 00:12:45.319 13:11:42 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.319 13:11:42 sw_hotplug -- common/autotest_common.sh@903 -- # return 0 00:12:45.319 13:11:42 sw_hotplug -- nvme/sw_hotplug.sh@108 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:45.319 13:11:42 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.319 13:11:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:45.576 13:11:42 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.576 13:11:42 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # debug_remove_attach_helper 3 6 true 00:12:45.576 13:11:42 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:12:45.576 13:11:42 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 true 00:12:45.577 13:11:42 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:12:45.577 13:11:42 sw_hotplug -- common/autotest_common.sh@706 -- # exec 
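For the target-based half of the test the controllers are attached as NVMe bdevs over the RPC socket instead of being driven by the hotplug example app. The RPCs traced above can be issued by hand against the running spdk_tgt with scripts/rpc.py (default socket /var/tmp/spdk.sock):

    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme00 -t PCIe -a 0000:00:10.0
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme01 -t PCIe -a 0000:00:11.0
    ./scripts/rpc.py bdev_nvme_set_hotplug -e     # enable hotplug monitoring in the bdev layer
    ./scripts/rpc.py bdev_get_bdevs               # expect Nvme00n1 / Nvme01n1 in the output
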
00:12:45.577 13:11:42 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:12:45.577 13:11:42 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 true 00:12:45.577 13:11:42 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:12:45.577 13:11:42 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:12:45.577 13:11:42 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=true 00:12:45.577 13:11:42 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:12:45.577 13:11:42 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:12:52.180 13:11:48 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:12:52.180 13:11:48 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:52.180 13:11:48 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:52.180 13:11:48 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:52.180 13:11:48 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:52.180 13:11:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # true 00:12:52.180 13:11:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:12:52.180 [2024-07-15 13:11:48.152275] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:52.180 [2024-07-15 13:11:48.154960] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.180 [2024-07-15 13:11:48.155046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.180 [2024-07-15 13:11:48.155076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.180 [2024-07-15 13:11:48.155129] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.180 [2024-07-15 13:11:48.155169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.180 [2024-07-15 13:11:48.155191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.180 [2024-07-15 13:11:48.155207] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.180 [2024-07-15 13:11:48.155227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.180 [2024-07-15 13:11:48.155241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.180 [2024-07-15 13:11:48.155258] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.180 [2024-07-15 13:11:48.155271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.180 [2024-07-15 13:11:48.155288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.180 [2024-07-15 13:11:48.652237] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
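In bdev mode the helper judges each cycle by whether bdevs survive the remove/attach round trip; further down the trace (sw_hotplug.sh@40-@41) this appears as bdev_get_bdevs piped through jq length. A standalone sketch of that check, assuming the default RPC socket and not claiming to mirror the helper's exact control flow:

    # Count registered bdevs after a hotplug event; treat zero as a failure.
    count=$(./scripts/rpc.py bdev_get_bdevs | jq length)
    if (( count == 0 )); then
        echo "no bdevs after hotplug event" >&2
        exit 1
    fi
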
00:12:52.180 [2024-07-15 13:11:48.654837] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.180 [2024-07-15 13:11:48.654890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.180 [2024-07-15 13:11:48.654918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.180 [2024-07-15 13:11:48.654941] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.180 [2024-07-15 13:11:48.654959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.180 [2024-07-15 13:11:48.654975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.180 [2024-07-15 13:11:48.654992] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.180 [2024-07-15 13:11:48.655007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.180 [2024-07-15 13:11:48.655026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.180 [2024-07-15 13:11:48.655040] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.180 [2024-07-15 13:11:48.655059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.180 [2024-07-15 13:11:48.655073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.439 13:11:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:12:57.439 13:11:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # jq length 00:12:57.439 13:11:54 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.439 13:11:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:57.439 13:11:54 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.697 13:11:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # (( 4 == 0 )) 00:12:57.697 13:11:54 sw_hotplug -- nvme/sw_hotplug.sh@41 -- # return 1 00:12:57.697 13:11:54 sw_hotplug -- common/autotest_common.sh@714 -- # trap - ERR 00:12:57.697 13:11:54 sw_hotplug -- common/autotest_common.sh@714 -- # print_backtrace 00:12:57.697 13:11:54 sw_hotplug -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:12:57.697 13:11:54 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:12:57.697 13:11:54 sw_hotplug -- common/autotest_common.sh@714 -- # time=12.11 00:12:57.697 13:11:54 sw_hotplug -- common/autotest_common.sh@714 -- # trap - ERR 00:12:57.697 13:11:54 sw_hotplug -- common/autotest_common.sh@714 -- # print_backtrace 00:12:57.697 13:11:54 sw_hotplug -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:12:57.697 13:11:54 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:12:57.697 13:11:54 sw_hotplug -- common/autotest_common.sh@716 -- # echo 12.11 00:12:57.697 13:11:54 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=12.11 00:12:57.697 13:11:54 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme 
drive(s))' 12.11 2 00:12:57.697 remove_attach_helper took 12.11s to complete (handling 2 nvme drive(s)) 13:11:54 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:57.697 13:11:54 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.697 13:11:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:57.697 13:11:54 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.697 13:11:54 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:57.697 13:11:54 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.697 13:11:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:57.697 13:11:54 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.697 13:11:54 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # debug_remove_attach_helper 3 6 true 00:12:57.697 13:11:54 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:12:57.697 13:11:54 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 true 00:12:57.697 13:11:54 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:12:57.697 13:11:54 sw_hotplug -- common/autotest_common.sh@706 -- # exec 00:12:57.697 13:11:54 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:12:57.697 13:11:54 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 true 00:12:57.697 13:11:54 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:12:57.697 13:11:54 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:12:57.697 13:11:54 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=true 00:12:57.697 13:11:54 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:12:57.697 13:11:54 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:13:04.246 13:12:00 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:13:04.246 13:12:00 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:13:04.246 13:12:00 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:13:04.246 13:12:00 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # trap - ERR 00:13:04.246 13:12:00 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # print_backtrace 00:13:04.246 13:12:00 sw_hotplug -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:13:04.246 13:12:00 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:13:04.246 13:12:00 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:13:04.246 13:12:00 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:13:04.246 13:12:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # true 00:13:04.246 13:12:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:13:09.545 13:12:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:13:09.545 13:12:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # jq length 00:13:09.545 13:12:06 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.545 13:12:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:09.545 13:12:06 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.545 13:12:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # (( 4 == 0 )) 00:13:09.545 13:12:06 sw_hotplug -- nvme/sw_hotplug.sh@41 -- # return 1 00:13:09.545 13:12:06 sw_hotplug -- common/autotest_common.sh@714 -- # time=12.06 00:13:09.545 13:12:06 sw_hotplug -- common/autotest_common.sh@714 -- # trap - ERR 00:13:09.545 13:12:06 sw_hotplug -- common/autotest_common.sh@714 -- # print_backtrace 00:13:09.545 13:12:06 sw_hotplug -- 
common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:13:09.545 13:12:06 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:13:09.545 13:12:06 sw_hotplug -- common/autotest_common.sh@716 -- # echo 12.06 00:13:09.545 13:12:06 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=12.06 00:13:09.545 13:12:06 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 12.06 2 00:13:09.545 remove_attach_helper took 12.06s to complete (handling 2 nvme drive(s)) 13:12:06 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:13:09.545 13:12:06 sw_hotplug -- nvme/sw_hotplug.sh@118 -- # killprocess 84427 00:13:09.545 13:12:06 sw_hotplug -- common/autotest_common.sh@946 -- # '[' -z 84427 ']' 00:13:09.545 13:12:06 sw_hotplug -- common/autotest_common.sh@950 -- # kill -0 84427 00:13:09.545 13:12:06 sw_hotplug -- common/autotest_common.sh@951 -- # uname 00:13:09.545 13:12:06 sw_hotplug -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:09.545 13:12:06 sw_hotplug -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84427 00:13:09.803 killing process with pid 84427 00:13:09.803 13:12:06 sw_hotplug -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:09.803 13:12:06 sw_hotplug -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:09.803 13:12:06 sw_hotplug -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84427' 00:13:09.803 13:12:06 sw_hotplug -- common/autotest_common.sh@965 -- # kill 84427 00:13:09.803 13:12:06 sw_hotplug -- common/autotest_common.sh@970 -- # wait 84427 00:13:10.061 00:13:10.061 real 1m15.747s 00:13:10.061 user 0m43.881s 00:13:10.061 sys 0m14.826s 00:13:10.061 13:12:06 sw_hotplug -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:10.061 13:12:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:10.061 ************************************ 00:13:10.061 END TEST sw_hotplug 00:13:10.061 ************************************ 00:13:10.319 13:12:06 -- spdk/autotest.sh@247 -- # [[ 1 -eq 1 ]] 00:13:10.319 13:12:06 -- spdk/autotest.sh@248 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:10.319 13:12:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:10.319 13:12:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:10.319 13:12:06 -- common/autotest_common.sh@10 -- # set +x 00:13:10.319 ************************************ 00:13:10.319 START TEST nvme_xnvme 00:13:10.319 ************************************ 00:13:10.319 13:12:06 nvme_xnvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:10.319 * Looking for test storage... 
00:13:10.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:10.319 13:12:06 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:10.319 13:12:06 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.319 13:12:06 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.319 13:12:06 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.319 13:12:06 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.319 13:12:06 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.319 13:12:06 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.319 13:12:06 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:10.319 13:12:06 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.319 13:12:06 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:13:10.319 13:12:06 nvme_xnvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:10.319 13:12:06 nvme_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:10.319 13:12:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.319 ************************************ 00:13:10.319 START TEST xnvme_to_malloc_dd_copy 00:13:10.319 ************************************ 00:13:10.319 13:12:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1121 -- # malloc_to_xnvme_copy 00:13:10.319 13:12:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:13:10.319 13:12:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:13:10.319 13:12:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:13:10.319 13:12:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # return 00:13:10.319 13:12:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 
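The xnvme copy test pairs a 1 GiB null_blk device with a malloc bdev and moves data between them with spdk_dd; the exact JSON it generates is dumped a little further down. A hedged manual equivalent, writing the config to a file instead of the /dev/fd/62 pipe used by the harness (paths relative to the SPDK repo; the heredoc file name is illustrative):

    modprobe null_blk gb=1
    cat > xnvme.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [
      { "method": "bdev_malloc_create", "params": { "name": "malloc0", "block_size": 512, "num_blocks": 2097152 } },
      { "method": "bdev_xnvme_create",  "params": { "name": "null0", "filename": "/dev/nullb0", "io_mechanism": "libaio" } },
      { "method": "bdev_wait_for_examine" } ] } ] }
    EOF
    ./build/bin/spdk_dd --ib=malloc0 --ob=null0 --json xnvme.json

The test then repeats the copy in the opposite direction (--ib=null0 --ob=malloc0) and again with io_mechanism set to io_uring, as the following runs show.
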
00:13:10.319 13:12:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:13:10.319 13:12:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:10.319 13:12:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:13:10.319 13:12:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:13:10.319 13:12:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:13:10.319 13:12:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:13:10.319 13:12:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:13:10.319 13:12:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:13:10.319 13:12:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:13:10.319 13:12:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:13:10.319 13:12:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:13:10.319 13:12:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:10.319 13:12:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:10.319 13:12:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:10.319 13:12:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:10.319 13:12:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:10.319 13:12:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:10.319 13:12:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:10.319 13:12:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:10.319 { 00:13:10.319 "subsystems": [ 00:13:10.319 { 00:13:10.319 "subsystem": "bdev", 00:13:10.319 "config": [ 00:13:10.319 { 00:13:10.319 "params": { 00:13:10.319 "block_size": 512, 00:13:10.319 "num_blocks": 2097152, 00:13:10.319 "name": "malloc0" 00:13:10.319 }, 00:13:10.319 "method": "bdev_malloc_create" 00:13:10.319 }, 00:13:10.319 { 00:13:10.319 "params": { 00:13:10.319 "io_mechanism": "libaio", 00:13:10.319 "filename": "/dev/nullb0", 00:13:10.319 "name": "null0" 00:13:10.319 }, 00:13:10.319 "method": "bdev_xnvme_create" 00:13:10.319 }, 00:13:10.319 { 00:13:10.319 "method": "bdev_wait_for_examine" 00:13:10.319 } 00:13:10.319 ] 00:13:10.319 } 00:13:10.319 ] 00:13:10.319 } 00:13:10.319 [2024-07-15 13:12:07.029639] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:13:10.320 [2024-07-15 13:12:07.030100] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84775 ] 00:13:10.577 [2024-07-15 13:12:07.181844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.577 [2024-07-15 13:12:07.282742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.349  Copying: 160/1024 [MB] (160 MBps) Copying: 320/1024 [MB] (160 MBps) Copying: 481/1024 [MB] (160 MBps) Copying: 642/1024 [MB] (160 MBps) Copying: 803/1024 [MB] (161 MBps) Copying: 963/1024 [MB] (159 MBps) Copying: 1024/1024 [MB] (average 160 MBps) 00:13:18.349 00:13:18.349 13:12:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:18.349 13:12:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:18.349 13:12:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:18.349 13:12:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:18.349 { 00:13:18.349 "subsystems": [ 00:13:18.349 { 00:13:18.349 "subsystem": "bdev", 00:13:18.349 "config": [ 00:13:18.349 { 00:13:18.349 "params": { 00:13:18.349 "block_size": 512, 00:13:18.349 "num_blocks": 2097152, 00:13:18.349 "name": "malloc0" 00:13:18.349 }, 00:13:18.349 "method": "bdev_malloc_create" 00:13:18.349 }, 00:13:18.349 { 00:13:18.349 "params": { 00:13:18.349 "io_mechanism": "libaio", 00:13:18.349 "filename": "/dev/nullb0", 00:13:18.349 "name": "null0" 00:13:18.349 }, 00:13:18.349 "method": "bdev_xnvme_create" 00:13:18.349 }, 00:13:18.349 { 00:13:18.349 "method": "bdev_wait_for_examine" 00:13:18.349 } 00:13:18.349 ] 00:13:18.349 } 00:13:18.349 ] 00:13:18.349 } 00:13:18.349 [2024-07-15 13:12:14.837865] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:13:18.349 [2024-07-15 13:12:14.838055] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84866 ] 00:13:18.349 [2024-07-15 13:12:14.978868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.349 [2024-07-15 13:12:15.075996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.181  Copying: 154/1024 [MB] (154 MBps) Copying: 308/1024 [MB] (153 MBps) Copying: 461/1024 [MB] (153 MBps) Copying: 613/1024 [MB] (152 MBps) Copying: 768/1024 [MB] (154 MBps) Copying: 922/1024 [MB] (154 MBps) Copying: 1024/1024 [MB] (average 153 MBps) 00:13:26.181 00:13:26.181 13:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:26.181 13:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:26.181 13:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:26.181 13:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:26.181 13:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:26.181 13:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:26.181 { 00:13:26.181 "subsystems": [ 00:13:26.181 { 00:13:26.181 "subsystem": "bdev", 00:13:26.181 "config": [ 00:13:26.181 { 00:13:26.181 "params": { 00:13:26.181 "block_size": 512, 00:13:26.181 "num_blocks": 2097152, 00:13:26.181 "name": "malloc0" 00:13:26.181 }, 00:13:26.181 "method": "bdev_malloc_create" 00:13:26.181 }, 00:13:26.181 { 00:13:26.181 "params": { 00:13:26.181 "io_mechanism": "io_uring", 00:13:26.181 "filename": "/dev/nullb0", 00:13:26.181 "name": "null0" 00:13:26.181 }, 00:13:26.181 "method": "bdev_xnvme_create" 00:13:26.181 }, 00:13:26.181 { 00:13:26.181 "method": "bdev_wait_for_examine" 00:13:26.181 } 00:13:26.181 ] 00:13:26.181 } 00:13:26.181 ] 00:13:26.182 } 00:13:26.439 [2024-07-15 13:12:22.940947] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:13:26.439 [2024-07-15 13:12:22.941347] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84960 ] 00:13:26.439 [2024-07-15 13:12:23.092921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.702 [2024-07-15 13:12:23.196978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.110  Copying: 162/1024 [MB] (162 MBps) Copying: 325/1024 [MB] (162 MBps) Copying: 488/1024 [MB] (163 MBps) Copying: 649/1024 [MB] (161 MBps) Copying: 812/1024 [MB] (162 MBps) Copying: 976/1024 [MB] (163 MBps) Copying: 1024/1024 [MB] (average 162 MBps) 00:13:34.110 00:13:34.110 13:12:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:34.110 13:12:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:34.110 13:12:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:34.110 13:12:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:34.110 { 00:13:34.110 "subsystems": [ 00:13:34.110 { 00:13:34.110 "subsystem": "bdev", 00:13:34.110 "config": [ 00:13:34.110 { 00:13:34.110 "params": { 00:13:34.110 "block_size": 512, 00:13:34.110 "num_blocks": 2097152, 00:13:34.110 "name": "malloc0" 00:13:34.110 }, 00:13:34.110 "method": "bdev_malloc_create" 00:13:34.110 }, 00:13:34.110 { 00:13:34.110 "params": { 00:13:34.110 "io_mechanism": "io_uring", 00:13:34.110 "filename": "/dev/nullb0", 00:13:34.110 "name": "null0" 00:13:34.110 }, 00:13:34.110 "method": "bdev_xnvme_create" 00:13:34.110 }, 00:13:34.110 { 00:13:34.110 "method": "bdev_wait_for_examine" 00:13:34.110 } 00:13:34.110 ] 00:13:34.110 } 00:13:34.110 ] 00:13:34.110 } 00:13:34.110 [2024-07-15 13:12:30.642325] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:13:34.110 [2024-07-15 13:12:30.642580] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85053 ] 00:13:34.110 [2024-07-15 13:12:30.790470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.367 [2024-07-15 13:12:30.888749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.410  Copying: 166/1024 [MB] (166 MBps) Copying: 333/1024 [MB] (167 MBps) Copying: 500/1024 [MB] (167 MBps) Copying: 667/1024 [MB] (167 MBps) Copying: 834/1024 [MB] (167 MBps) Copying: 1001/1024 [MB] (166 MBps) Copying: 1024/1024 [MB] (average 166 MBps) 00:13:41.410 00:13:41.410 13:12:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:13:41.410 13:12:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@195 -- # modprobe -r null_blk 00:13:41.410 00:13:41.410 real 0m31.229s 00:13:41.410 user 0m25.220s 00:13:41.410 sys 0m5.502s 00:13:41.410 13:12:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:41.410 ************************************ 00:13:41.410 END TEST xnvme_to_malloc_dd_copy 00:13:41.410 ************************************ 00:13:41.410 13:12:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:41.668 13:12:38 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:41.668 13:12:38 nvme_xnvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:41.668 13:12:38 nvme_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:41.668 13:12:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:41.668 ************************************ 00:13:41.668 START TEST xnvme_bdevperf 00:13:41.668 ************************************ 00:13:41.668 13:12:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1121 -- # xnvme_bdevperf 00:13:41.668 13:12:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:13:41.668 13:12:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:13:41.668 13:12:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:13:41.668 13:12:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # return 00:13:41.668 13:12:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:13:41.669 13:12:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:41.669 13:12:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:13:41.669 13:12:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:13:41.669 13:12:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:13:41.669 13:12:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:13:41.669 13:12:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:13:41.669 13:12:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:13:41.669 13:12:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:41.669 13:12:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:41.669 13:12:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:41.669 13:12:38 
nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:41.669 13:12:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:41.669 13:12:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:41.669 13:12:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:41.669 13:12:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:41.669 { 00:13:41.669 "subsystems": [ 00:13:41.669 { 00:13:41.669 "subsystem": "bdev", 00:13:41.669 "config": [ 00:13:41.669 { 00:13:41.669 "params": { 00:13:41.669 "io_mechanism": "libaio", 00:13:41.669 "filename": "/dev/nullb0", 00:13:41.669 "name": "null0" 00:13:41.669 }, 00:13:41.669 "method": "bdev_xnvme_create" 00:13:41.669 }, 00:13:41.669 { 00:13:41.669 "method": "bdev_wait_for_examine" 00:13:41.669 } 00:13:41.669 ] 00:13:41.669 } 00:13:41.669 ] 00:13:41.669 } 00:13:41.669 [2024-07-15 13:12:38.307381] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:13:41.669 [2024-07-15 13:12:38.307584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85163 ] 00:13:41.926 [2024-07-15 13:12:38.457930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.926 [2024-07-15 13:12:38.557582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.184 Running I/O for 5 seconds... 00:13:47.469 00:13:47.469 Latency(us) 00:13:47.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:47.469 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:47.469 null0 : 5.00 111573.28 435.83 0.00 0.00 570.01 173.15 3842.79 00:13:47.469 =================================================================================================================== 00:13:47.469 Total : 111573.28 435.83 0.00 0.00 570.01 173.15 3842.79 00:13:47.469 13:12:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:47.469 13:12:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:47.469 13:12:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:47.469 13:12:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:47.469 13:12:43 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:47.469 13:12:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:47.469 { 00:13:47.469 "subsystems": [ 00:13:47.469 { 00:13:47.469 "subsystem": "bdev", 00:13:47.469 "config": [ 00:13:47.469 { 00:13:47.469 "params": { 00:13:47.469 "io_mechanism": "io_uring", 00:13:47.469 "filename": "/dev/nullb0", 00:13:47.469 "name": "null0" 00:13:47.469 }, 00:13:47.469 "method": "bdev_xnvme_create" 00:13:47.469 }, 00:13:47.469 { 00:13:47.469 "method": "bdev_wait_for_examine" 00:13:47.469 } 00:13:47.469 ] 00:13:47.469 } 00:13:47.469 ] 00:13:47.469 } 00:13:47.469 [2024-07-15 13:12:44.085284] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
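The bdevperf results above and below come from the same pattern: a 5-second, queue-depth-64, 4 KiB random-read job against the single xnvme bdev null0, first with libaio (about 111.6k IOPS) and then with io_uring (about 150.9k IOPS). A minimal sketch of that invocation, assuming the same build path and /dev/nullb0 as this run and a config file in place of the /dev/fd/62 pipe:

conf=$(mktemp)
cat > "$conf" <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": { "name": "null0", "filename": "/dev/nullb0", "io_mechanism": "io_uring" }
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# -q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds;
# -T limits the run to the named bdev, matching the single "Job: null0" line in the results.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json "$conf" -q 64 -o 4096 -w randread -t 5 -T null0
rm -f "$conf"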
00:13:47.469 [2024-07-15 13:12:44.085494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85233 ] 00:13:47.728 [2024-07-15 13:12:44.231063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.728 [2024-07-15 13:12:44.328351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.728 Running I/O for 5 seconds... 00:13:52.988 00:13:52.988 Latency(us) 00:13:52.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.989 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:52.989 null0 : 5.00 150877.56 589.37 0.00 0.00 420.82 247.62 603.23 00:13:52.989 =================================================================================================================== 00:13:52.989 Total : 150877.56 589.37 0.00 0.00 420.82 247.62 603.23 00:13:52.989 13:12:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:52.989 13:12:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@195 -- # modprobe -r null_blk 00:13:53.248 00:13:53.248 real 0m11.543s 00:13:53.248 user 0m8.595s 00:13:53.248 sys 0m2.757s 00:13:53.248 13:12:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:53.248 13:12:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:53.248 ************************************ 00:13:53.248 END TEST xnvme_bdevperf 00:13:53.248 ************************************ 00:13:53.248 00:13:53.248 real 0m42.958s 00:13:53.248 user 0m33.894s 00:13:53.248 sys 0m8.362s 00:13:53.248 13:12:49 nvme_xnvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:53.248 13:12:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:53.248 ************************************ 00:13:53.248 END TEST nvme_xnvme 00:13:53.248 ************************************ 00:13:53.248 13:12:49 -- spdk/autotest.sh@249 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:53.248 13:12:49 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:53.248 13:12:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:53.248 13:12:49 -- common/autotest_common.sh@10 -- # set +x 00:13:53.248 ************************************ 00:13:53.248 START TEST blockdev_xnvme 00:13:53.248 ************************************ 00:13:53.248 13:12:49 blockdev_xnvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:53.248 * Looking for test storage... 
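Both of the test groups that just finished stage their device the same way: the null_blk kernel module is loaded to expose /dev/nullb0 (gb=1 for a 1 GiB device in the bdevperf case), the workload runs against it through an xnvme bdev, and remove_null_blk unloads the module afterwards. Only the modprobe calls visible in this log are assumed in the sketch below.

# Load null_blk with a 1 GiB backing size; this is what creates /dev/nullb0.
modprobe null_blk gb=1
# ... run the xnvme workloads (spdk_dd copies, bdevperf) against /dev/nullb0 ...
# Remove the module again once the tests are done.
modprobe -r null_blk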
00:13:53.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:53.248 13:12:49 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:53.248 13:12:49 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:53.248 13:12:49 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:53.248 13:12:49 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:53.248 13:12:49 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:53.248 13:12:49 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:53.248 13:12:49 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:53.248 13:12:49 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:53.248 13:12:49 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:13:53.248 13:12:49 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:13:53.248 13:12:49 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:13:53.248 13:12:49 blockdev_xnvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:13:53.248 13:12:49 blockdev_xnvme -- bdev/blockdev.sh@674 -- # uname -s 00:13:53.249 13:12:49 blockdev_xnvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:13:53.249 13:12:49 blockdev_xnvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:13:53.249 13:12:49 blockdev_xnvme -- bdev/blockdev.sh@682 -- # test_type=xnvme 00:13:53.249 13:12:49 blockdev_xnvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:13:53.249 13:12:49 blockdev_xnvme -- bdev/blockdev.sh@684 -- # dek= 00:13:53.249 13:12:49 blockdev_xnvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:13:53.249 13:12:49 blockdev_xnvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:13:53.249 13:12:49 blockdev_xnvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:13:53.249 13:12:49 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == bdev ]] 00:13:53.249 13:12:49 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == crypto_* ]] 00:13:53.249 13:12:49 blockdev_xnvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:13:53.249 13:12:49 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=85366 00:13:53.249 13:12:49 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:53.249 13:12:49 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:53.249 13:12:49 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 85366 00:13:53.249 13:12:49 blockdev_xnvme -- common/autotest_common.sh@827 -- # '[' -z 85366 ']' 00:13:53.249 13:12:49 blockdev_xnvme -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.249 13:12:49 blockdev_xnvme -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:53.249 13:12:49 blockdev_xnvme -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.249 13:12:49 blockdev_xnvme -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:53.249 13:12:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:53.506 [2024-07-15 13:12:50.047463] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:13:53.506 [2024-07-15 13:12:50.047638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85366 ] 00:13:53.506 [2024-07-15 13:12:50.189337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.764 [2024-07-15 13:12:50.286335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.329 13:12:51 blockdev_xnvme -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:54.329 13:12:51 blockdev_xnvme -- common/autotest_common.sh@860 -- # return 0 00:13:54.329 13:12:51 blockdev_xnvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:13:54.329 13:12:51 blockdev_xnvme -- bdev/blockdev.sh@729 -- # setup_xnvme_conf 00:13:54.329 13:12:51 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:13:54.329 13:12:51 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:13:54.329 13:12:51 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:54.587 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:54.844 Waiting for block devices as requested 00:13:54.844 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:54.844 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:00.118 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:00.118 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:14:00.118 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:14:00.118 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:14:00.118 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1666 -- # local nvme bdf 00:14:00.118 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:14:00.118 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:14:00.118 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:14:00.118 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:00.118 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:14:00.118 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:14:00.118 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n2 00:14:00.118 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:14:00.118 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:14:00.118 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:14:00.118 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:14:00.118 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n3 00:14:00.118 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:14:00.118 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:14:00.118 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:14:00.118 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:14:00.118 
13:12:56 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1c1n1 00:14:00.118 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme1c1n1 00:14:00.118 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:14:00.118 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:14:00.118 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 
00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:14:00.119 nvme0n1 00:14:00.119 nvme0n2 00:14:00.119 nvme0n3 00:14:00.119 nvme1n1 00:14:00.119 nvme2n1 00:14:00.119 nvme3n1 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@740 -- # cat 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.119 13:12:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.119 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 
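The run of bdev_xnvme_create lines above is blockdev.sh building one xnvme bdev per visible NVMe namespace: it walks /dev/nvme*n*, skips any namespace whose /sys/block/<dev>/queue/zoned reports something other than none, and submits the accumulated create calls to the spdk_tgt started earlier. A condensed sketch of that loop is below; the test batches the calls through its rpc_cmd helper on /var/tmp/spdk.sock, so the plain per-command rpc.py loop here is a simplification.

io_mechanism=io_uring
cmds=()
for nvme in /dev/nvme*n*; do
  dev=${nvme##*/}
  # Skip zoned namespaces: xnvme bdevs are only created on conventional ones here.
  if [[ -e /sys/block/$dev/queue/zoned ]] && [[ $(cat "/sys/block/$dev/queue/zoned") != none ]]; then
    continue
  fi
  cmds+=("bdev_xnvme_create $nvme $dev $io_mechanism")
done
for cmd in "${cmds[@]}"; do
  # Intentionally unquoted so the RPC method name and its arguments split.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock $cmd
done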
00:14:00.377 13:12:56 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.377 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:14:00.378 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "ec2d030c-70ef-486b-8250-cee5a12ca2c3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ec2d030c-70ef-486b-8250-cee5a12ca2c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "e1d95974-c615-4929-9611-a3384b8556d1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e1d95974-c615-4929-9611-a3384b8556d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "0346709c-a137-4e28-a54d-4e5340dc71e5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0346709c-a137-4e28-a54d-4e5340dc71e5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "466d8e15-0831-49c9-ad88-d728e63ef9bd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "466d8e15-0831-49c9-ad88-d728e63ef9bd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "c7de1f5c-8f16-48b0-b3a2-7c1c799d5737"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c7de1f5c-8f16-48b0-b3a2-7c1c799d5737",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": 
false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "a6566a66-57dd-4a98-8425-84478416c11f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a6566a66-57dd-4a98-8425-84478416c11f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:14:00.378 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:14:00.378 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:14:00.378 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=nvme0n1 00:14:00.378 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:14:00.378 13:12:56 blockdev_xnvme -- bdev/blockdev.sh@754 -- # killprocess 85366 00:14:00.378 13:12:56 blockdev_xnvme -- common/autotest_common.sh@946 -- # '[' -z 85366 ']' 00:14:00.378 13:12:56 blockdev_xnvme -- common/autotest_common.sh@950 -- # kill -0 85366 00:14:00.378 13:12:56 blockdev_xnvme -- common/autotest_common.sh@951 -- # uname 00:14:00.378 13:12:56 blockdev_xnvme -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:00.378 13:12:56 blockdev_xnvme -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85366 00:14:00.378 killing process with pid 85366 00:14:00.378 13:12:56 blockdev_xnvme -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:00.378 13:12:56 blockdev_xnvme -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:00.378 13:12:56 blockdev_xnvme -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85366' 00:14:00.378 13:12:56 blockdev_xnvme -- common/autotest_common.sh@965 -- # kill 85366 00:14:00.378 13:12:56 blockdev_xnvme -- common/autotest_common.sh@970 -- # wait 85366 00:14:00.942 13:12:57 blockdev_xnvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:00.942 13:12:57 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:00.942 13:12:57 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:14:00.942 13:12:57 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:00.942 13:12:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.942 ************************************ 00:14:00.942 START TEST bdev_hello_world 00:14:00.942 ************************************ 00:14:00.942 13:12:57 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:00.942 [2024-07-15 13:12:57.511614] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
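With the six xnvme bdevs registered and dumped via bdev_get_bdevs, the hello_world test picks the first unclaimed name (nvme0n1) and runs the hello_bdev example against the saved configuration. The invocation as used in this run is repeated below for reference; bdev.json is the config the test wrote out from the running target, and the trailing '' is the empty extra-context argument the script passes through.

# Sketch: run the hello_bdev example against the first unclaimed xnvme bdev.
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 ''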
00:14:00.942 [2024-07-15 13:12:57.511796] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85627 ] 00:14:00.942 [2024-07-15 13:12:57.653475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.201 [2024-07-15 13:12:57.753652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.460 [2024-07-15 13:12:57.959626] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:01.460 [2024-07-15 13:12:57.959691] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:14:01.460 [2024-07-15 13:12:57.959718] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:01.460 [2024-07-15 13:12:57.962272] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:01.460 [2024-07-15 13:12:57.962708] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:01.460 [2024-07-15 13:12:57.962745] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:01.460 [2024-07-15 13:12:57.962990] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:14:01.460 00:14:01.460 [2024-07-15 13:12:57.963053] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:01.718 00:14:01.718 real 0m0.787s 00:14:01.718 user 0m0.470s 00:14:01.718 sys 0m0.206s 00:14:01.718 13:12:58 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:01.718 13:12:58 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:01.718 ************************************ 00:14:01.718 END TEST bdev_hello_world 00:14:01.718 ************************************ 00:14:01.718 13:12:58 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:14:01.718 13:12:58 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:01.718 13:12:58 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:01.718 13:12:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:01.718 ************************************ 00:14:01.718 START TEST bdev_bounds 00:14:01.718 ************************************ 00:14:01.718 13:12:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:14:01.718 13:12:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=85658 00:14:01.718 13:12:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:01.718 13:12:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:01.718 Process bdevio pid: 85658 00:14:01.718 13:12:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 85658' 00:14:01.718 13:12:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 85658 00:14:01.718 13:12:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 85658 ']' 00:14:01.718 13:12:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.718 13:12:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:01.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:01.718 13:12:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.718 13:12:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:01.718 13:12:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:01.718 [2024-07-15 13:12:58.366134] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:01.718 [2024-07-15 13:12:58.366344] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85658 ] 00:14:01.976 [2024-07-15 13:12:58.514635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:01.976 [2024-07-15 13:12:58.616412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.976 [2024-07-15 13:12:58.616506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.976 [2024-07-15 13:12:58.616426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.911 13:12:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:02.911 13:12:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:14:02.911 13:12:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:14:02.911 I/O targets: 00:14:02.911 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:02.911 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:02.911 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:02.911 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:14:02.911 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:14:02.911 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:14:02.911 00:14:02.911 00:14:02.911 CUnit - A unit testing framework for C - Version 2.1-3 00:14:02.911 http://cunit.sourceforge.net/ 00:14:02.911 00:14:02.911 00:14:02.911 Suite: bdevio tests on: nvme3n1 00:14:02.911 Test: blockdev write read block ...passed 00:14:02.911 Test: blockdev write zeroes read block ...passed 00:14:02.911 Test: blockdev write zeroes read no split ...passed 00:14:02.911 Test: blockdev write zeroes read split ...passed 00:14:02.911 Test: blockdev write zeroes read split partial ...passed 00:14:02.911 Test: blockdev reset ...passed 00:14:02.911 Test: blockdev write read 8 blocks ...passed 00:14:02.911 Test: blockdev write read size > 128k ...passed 00:14:02.911 Test: blockdev write read invalid size ...passed 00:14:02.911 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:02.911 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:02.911 Test: blockdev write read max offset ...passed 00:14:02.911 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:02.911 Test: blockdev writev readv 8 blocks ...passed 00:14:02.911 Test: blockdev writev readv 30 x 1block ...passed 00:14:02.911 Test: blockdev writev readv block ...passed 00:14:02.911 Test: blockdev writev readv size > 128k ...passed 00:14:02.911 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:02.911 Test: blockdev comparev and writev ...passed 00:14:02.911 Test: blockdev nvme passthru rw ...passed 00:14:02.911 Test: blockdev nvme passthru vendor specific ...passed 00:14:02.911 Test: blockdev nvme admin 
passthru ...passed 00:14:02.911 Test: blockdev copy ...passed 00:14:02.911 Suite: bdevio tests on: nvme2n1 00:14:02.911 Test: blockdev write read block ...passed 00:14:02.911 Test: blockdev write zeroes read block ...passed 00:14:02.911 Test: blockdev write zeroes read no split ...passed 00:14:02.911 Test: blockdev write zeroes read split ...passed 00:14:02.911 Test: blockdev write zeroes read split partial ...passed 00:14:02.911 Test: blockdev reset ...passed 00:14:02.911 Test: blockdev write read 8 blocks ...passed 00:14:02.911 Test: blockdev write read size > 128k ...passed 00:14:02.911 Test: blockdev write read invalid size ...passed 00:14:02.911 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:02.911 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:02.911 Test: blockdev write read max offset ...passed 00:14:02.911 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:02.911 Test: blockdev writev readv 8 blocks ...passed 00:14:02.911 Test: blockdev writev readv 30 x 1block ...passed 00:14:02.911 Test: blockdev writev readv block ...passed 00:14:02.911 Test: blockdev writev readv size > 128k ...passed 00:14:02.911 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:02.911 Test: blockdev comparev and writev ...passed 00:14:02.911 Test: blockdev nvme passthru rw ...passed 00:14:02.911 Test: blockdev nvme passthru vendor specific ...passed 00:14:02.911 Test: blockdev nvme admin passthru ...passed 00:14:02.911 Test: blockdev copy ...passed 00:14:02.911 Suite: bdevio tests on: nvme1n1 00:14:02.911 Test: blockdev write read block ...passed 00:14:02.911 Test: blockdev write zeroes read block ...passed 00:14:02.911 Test: blockdev write zeroes read no split ...passed 00:14:02.911 Test: blockdev write zeroes read split ...passed 00:14:02.911 Test: blockdev write zeroes read split partial ...passed 00:14:02.911 Test: blockdev reset ...passed 00:14:02.911 Test: blockdev write read 8 blocks ...passed 00:14:02.911 Test: blockdev write read size > 128k ...passed 00:14:02.911 Test: blockdev write read invalid size ...passed 00:14:02.911 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:02.911 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:02.911 Test: blockdev write read max offset ...passed 00:14:02.911 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:02.911 Test: blockdev writev readv 8 blocks ...passed 00:14:02.911 Test: blockdev writev readv 30 x 1block ...passed 00:14:02.911 Test: blockdev writev readv block ...passed 00:14:02.911 Test: blockdev writev readv size > 128k ...passed 00:14:02.912 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:02.912 Test: blockdev comparev and writev ...passed 00:14:02.912 Test: blockdev nvme passthru rw ...passed 00:14:02.912 Test: blockdev nvme passthru vendor specific ...passed 00:14:02.912 Test: blockdev nvme admin passthru ...passed 00:14:02.912 Test: blockdev copy ...passed 00:14:02.912 Suite: bdevio tests on: nvme0n3 00:14:02.912 Test: blockdev write read block ...passed 00:14:02.912 Test: blockdev write zeroes read block ...passed 00:14:02.912 Test: blockdev write zeroes read no split ...passed 00:14:02.912 Test: blockdev write zeroes read split ...passed 00:14:02.912 Test: blockdev write zeroes read split partial ...passed 00:14:02.912 Test: blockdev reset ...passed 00:14:02.912 Test: blockdev write read 8 blocks ...passed 00:14:02.912 Test: blockdev 
write read size > 128k ...passed 00:14:02.912 Test: blockdev write read invalid size ...passed 00:14:02.912 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:02.912 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:02.912 Test: blockdev write read max offset ...passed 00:14:02.912 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:02.912 Test: blockdev writev readv 8 blocks ...passed 00:14:02.912 Test: blockdev writev readv 30 x 1block ...passed 00:14:02.912 Test: blockdev writev readv block ...passed 00:14:02.912 Test: blockdev writev readv size > 128k ...passed 00:14:02.912 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:02.912 Test: blockdev comparev and writev ...passed 00:14:02.912 Test: blockdev nvme passthru rw ...passed 00:14:02.912 Test: blockdev nvme passthru vendor specific ...passed 00:14:02.912 Test: blockdev nvme admin passthru ...passed 00:14:02.912 Test: blockdev copy ...passed 00:14:02.912 Suite: bdevio tests on: nvme0n2 00:14:02.912 Test: blockdev write read block ...passed 00:14:02.912 Test: blockdev write zeroes read block ...passed 00:14:02.912 Test: blockdev write zeroes read no split ...passed 00:14:02.912 Test: blockdev write zeroes read split ...passed 00:14:02.912 Test: blockdev write zeroes read split partial ...passed 00:14:02.912 Test: blockdev reset ...passed 00:14:02.912 Test: blockdev write read 8 blocks ...passed 00:14:02.912 Test: blockdev write read size > 128k ...passed 00:14:02.912 Test: blockdev write read invalid size ...passed 00:14:02.912 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:02.912 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:02.912 Test: blockdev write read max offset ...passed 00:14:02.912 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:02.912 Test: blockdev writev readv 8 blocks ...passed 00:14:02.912 Test: blockdev writev readv 30 x 1block ...passed 00:14:02.912 Test: blockdev writev readv block ...passed 00:14:02.912 Test: blockdev writev readv size > 128k ...passed 00:14:02.912 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:02.912 Test: blockdev comparev and writev ...passed 00:14:02.912 Test: blockdev nvme passthru rw ...passed 00:14:02.912 Test: blockdev nvme passthru vendor specific ...passed 00:14:02.912 Test: blockdev nvme admin passthru ...passed 00:14:02.912 Test: blockdev copy ...passed 00:14:02.912 Suite: bdevio tests on: nvme0n1 00:14:02.912 Test: blockdev write read block ...passed 00:14:02.912 Test: blockdev write zeroes read block ...passed 00:14:02.912 Test: blockdev write zeroes read no split ...passed 00:14:02.912 Test: blockdev write zeroes read split ...passed 00:14:02.912 Test: blockdev write zeroes read split partial ...passed 00:14:02.912 Test: blockdev reset ...passed 00:14:02.912 Test: blockdev write read 8 blocks ...passed 00:14:02.912 Test: blockdev write read size > 128k ...passed 00:14:02.912 Test: blockdev write read invalid size ...passed 00:14:02.912 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:02.912 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:02.912 Test: blockdev write read max offset ...passed 00:14:02.912 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:02.912 Test: blockdev writev readv 8 blocks ...passed 00:14:02.912 Test: blockdev writev readv 30 x 1block ...passed 00:14:02.912 Test: 
blockdev writev readv block ...passed 00:14:02.912 Test: blockdev writev readv size > 128k ...passed 00:14:02.912 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:02.912 Test: blockdev comparev and writev ...passed 00:14:02.912 Test: blockdev nvme passthru rw ...passed 00:14:02.912 Test: blockdev nvme passthru vendor specific ...passed 00:14:02.912 Test: blockdev nvme admin passthru ...passed 00:14:02.912 Test: blockdev copy ...passed 00:14:02.912 00:14:02.912 Run Summary: Type Total Ran Passed Failed Inactive 00:14:02.912 suites 6 6 n/a 0 0 00:14:02.912 tests 138 138 138 0 0 00:14:02.912 asserts 780 780 780 0 n/a 00:14:02.912 00:14:02.912 Elapsed time = 0.328 seconds 00:14:02.912 0 00:14:02.912 13:12:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 85658 00:14:02.912 13:12:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 85658 ']' 00:14:02.912 13:12:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 85658 00:14:02.912 13:12:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:14:02.912 13:12:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:02.912 13:12:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85658 00:14:02.912 13:12:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:02.912 13:12:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:02.912 killing process with pid 85658 00:14:02.912 13:12:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85658' 00:14:02.912 13:12:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@965 -- # kill 85658 00:14:02.912 13:12:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@970 -- # wait 85658 00:14:03.170 13:12:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:14:03.170 00:14:03.170 real 0m1.559s 00:14:03.170 user 0m3.728s 00:14:03.170 sys 0m0.379s 00:14:03.170 13:12:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:03.170 13:12:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:03.170 ************************************ 00:14:03.170 END TEST bdev_bounds 00:14:03.170 ************************************ 00:14:03.170 13:12:59 blockdev_xnvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:14:03.170 13:12:59 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:14:03.170 13:12:59 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:03.170 13:12:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:03.170 ************************************ 00:14:03.170 START TEST bdev_nbd 00:14:03.170 ************************************ 00:14:03.170 13:12:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:14:03.170 13:12:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:14:03.170 13:12:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:14:03.170 13:12:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:14:03.170 13:12:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:03.170 13:12:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:14:03.170 13:12:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:14:03.170 13:12:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:14:03.170 13:12:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:14:03.170 13:12:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:03.170 13:12:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:14:03.170 13:12:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:14:03.170 13:12:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:03.170 13:12:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:14:03.171 13:12:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:14:03.171 13:12:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:14:03.171 13:12:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=85708 00:14:03.171 13:12:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:03.171 13:12:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 85708 /var/tmp/spdk-nbd.sock 00:14:03.171 13:12:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:03.171 13:12:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 85708 ']' 00:14:03.171 13:12:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:03.171 13:12:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:03.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:03.171 13:12:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:03.171 13:12:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:03.171 13:12:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:03.429 [2024-07-15 13:12:59.972413] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
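The nbd test starting here exports each xnvme bdev as a kernel /dev/nbdN node over the dedicated /var/tmp/spdk-nbd.sock socket and then proves the node works with a single direct 4 KiB read (the 1+0 records in/out lines further down). A hedged sketch of that per-device check follows; the unbounded retry and the /tmp scratch-file path are simplifications of the test's waitfornbd helper, which bounds the retry at 20 attempts and reads into a file under the repo.

sock=/var/tmp/spdk-nbd.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Export one bdev through the kernel nbd driver (the test does this for each bdev/nbd pair).
"$rpc" -s "$sock" nbd_start_disk nvme0n1 /dev/nbd0

# Wait until the kernel has registered the device, then read one 4 KiB block with O_DIRECT.
until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
rm -f /tmp/nbdtest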
00:14:03.429 [2024-07-15 13:12:59.972628] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.429 [2024-07-15 13:13:00.116332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.687 [2024-07-15 13:13:00.216063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.252 13:13:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:04.252 13:13:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:14:04.252 13:13:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:14:04.252 13:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:04.252 13:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:14:04.252 13:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:04.252 13:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:14:04.252 13:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:04.252 13:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:14:04.252 13:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:04.252 13:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:14:04.252 13:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:04.252 13:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:04.252 13:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:04.252 13:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:14:04.512 13:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:04.512 13:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:04.512 13:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:04.512 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:14:04.512 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:04.512 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:04.512 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:04.512 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:14:04.512 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:04.512 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:04.512 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:04.512 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:04.512 1+0 records in 
00:14:04.512 1+0 records out 00:14:04.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470512 s, 8.7 MB/s 00:14:04.512 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.512 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:04.512 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.512 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:04.512 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:04.512 13:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:04.512 13:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:04.512 13:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:14:04.771 13:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:04.771 13:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:04.771 13:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:04.771 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:14:04.771 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:04.771 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:04.771 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:04.771 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:14:04.771 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:04.771 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:04.771 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:04.771 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:04.771 1+0 records in 00:14:04.771 1+0 records out 00:14:04.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486562 s, 8.4 MB/s 00:14:04.771 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.771 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:04.771 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.771 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:04.771 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:04.771 13:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:04.771 13:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:04.771 13:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:14:05.337 13:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:05.337 13:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:05.337 13:13:01 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:14:05.337 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:14:05.337 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:05.338 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:05.338 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:05.338 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:14:05.338 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:05.338 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:05.338 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:05.338 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:05.338 1+0 records in 00:14:05.338 1+0 records out 00:14:05.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528395 s, 7.8 MB/s 00:14:05.338 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.338 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:05.338 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.338 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:05.338 13:13:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:05.338 13:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:05.338 13:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:05.338 13:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:14:05.596 13:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:05.596 13:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:05.596 13:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:05.596 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:14:05.596 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:05.596 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:05.596 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:05.596 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:14:05.596 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:05.596 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:05.596 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:05.596 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:05.596 1+0 records in 00:14:05.596 1+0 records out 00:14:05.596 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000693265 s, 5.9 MB/s 00:14:05.596 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.596 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:05.596 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.596 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:05.596 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:05.596 13:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:05.596 13:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:05.596 13:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:14:05.854 13:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:05.854 13:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:05.854 13:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:05.854 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:14:05.854 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:05.854 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:05.854 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:05.854 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:14:05.854 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:05.854 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:05.854 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:05.854 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:05.854 1+0 records in 00:14:05.854 1+0 records out 00:14:05.854 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000743562 s, 5.5 MB/s 00:14:05.854 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.854 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:05.854 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.854 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:05.854 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:05.854 13:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:05.854 13:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:05.854 13:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:14:06.112 13:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:06.112 13:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:06.112 13:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:06.112 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:14:06.112 13:13:02 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@865 -- # local i 00:14:06.112 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:06.112 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:06.112 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:14:06.112 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:06.112 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:06.112 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:06.112 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.112 1+0 records in 00:14:06.112 1+0 records out 00:14:06.112 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000825537 s, 5.0 MB/s 00:14:06.112 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.112 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:06.112 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.112 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:06.112 13:13:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:06.112 13:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:06.112 13:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:06.112 13:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:06.370 13:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:06.370 { 00:14:06.370 "nbd_device": "/dev/nbd0", 00:14:06.370 "bdev_name": "nvme0n1" 00:14:06.370 }, 00:14:06.370 { 00:14:06.370 "nbd_device": "/dev/nbd1", 00:14:06.370 "bdev_name": "nvme0n2" 00:14:06.370 }, 00:14:06.370 { 00:14:06.370 "nbd_device": "/dev/nbd2", 00:14:06.370 "bdev_name": "nvme0n3" 00:14:06.370 }, 00:14:06.370 { 00:14:06.370 "nbd_device": "/dev/nbd3", 00:14:06.370 "bdev_name": "nvme1n1" 00:14:06.370 }, 00:14:06.370 { 00:14:06.370 "nbd_device": "/dev/nbd4", 00:14:06.370 "bdev_name": "nvme2n1" 00:14:06.370 }, 00:14:06.370 { 00:14:06.370 "nbd_device": "/dev/nbd5", 00:14:06.370 "bdev_name": "nvme3n1" 00:14:06.370 } 00:14:06.370 ]' 00:14:06.370 13:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:06.370 13:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:06.370 { 00:14:06.370 "nbd_device": "/dev/nbd0", 00:14:06.370 "bdev_name": "nvme0n1" 00:14:06.370 }, 00:14:06.370 { 00:14:06.370 "nbd_device": "/dev/nbd1", 00:14:06.370 "bdev_name": "nvme0n2" 00:14:06.370 }, 00:14:06.370 { 00:14:06.370 "nbd_device": "/dev/nbd2", 00:14:06.370 "bdev_name": "nvme0n3" 00:14:06.370 }, 00:14:06.370 { 00:14:06.370 "nbd_device": "/dev/nbd3", 00:14:06.370 "bdev_name": "nvme1n1" 00:14:06.370 }, 00:14:06.370 { 00:14:06.370 "nbd_device": "/dev/nbd4", 00:14:06.370 "bdev_name": "nvme2n1" 00:14:06.370 }, 00:14:06.370 { 00:14:06.370 "nbd_device": "/dev/nbd5", 00:14:06.370 "bdev_name": "nvme3n1" 00:14:06.370 } 00:14:06.370 ]' 00:14:06.370 13:13:02 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:06.370 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:14:06.370 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:06.370 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:14:06.370 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:06.370 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:06.370 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:06.370 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:06.638 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:06.638 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:06.638 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:06.638 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:06.638 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:06.638 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:06.638 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:06.638 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:06.638 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:06.638 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:06.899 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:06.899 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:06.899 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:06.899 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:06.899 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:06.899 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:06.899 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:06.899 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:06.899 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:06.899 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:07.156 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:07.156 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:07.156 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:07.156 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.156 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.156 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 
/proc/partitions 00:14:07.156 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:07.156 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.156 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.156 13:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:07.414 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:07.414 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:07.414 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:07.414 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.414 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.414 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:07.414 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:07.414 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.414 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.414 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:07.672 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:07.672 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:07.672 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:07.672 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.672 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.672 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:07.672 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:07.672 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.672 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.672 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:07.930 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:07.930 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:07.930 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:07.930 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.930 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.930 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:07.930 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:07.930 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.930 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:07.930 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:07.930 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:08.497 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:08.497 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:08.497 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:08.497 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:08.497 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:08.497 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:08.497 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:08.497 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:08.497 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:08.497 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:14:08.497 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:08.497 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:14:08.497 13:13:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:08.497 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:08.497 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:14:08.497 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:08.497 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:08.497 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:08.497 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:08.497 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:08.497 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:14:08.497 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:08.497 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:08.497 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:08.497 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:14:08.497 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:08.497 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:08.497 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:14:08.497 /dev/nbd0 00:14:08.756 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:08.756 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:08.756 13:13:05 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:14:08.756 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:08.756 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:08.756 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:08.756 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:14:08.756 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:08.756 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:08.756 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:08.756 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:08.756 1+0 records in 00:14:08.756 1+0 records out 00:14:08.756 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390734 s, 10.5 MB/s 00:14:08.756 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.756 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:08.756 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.756 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:08.756 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:08.756 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:08.756 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:08.756 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:14:08.756 /dev/nbd1 00:14:09.088 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:09.088 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:09.088 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:14:09.088 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:09.088 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:09.088 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:09.088 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:14:09.088 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:09.088 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:09.088 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:09.088 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:09.088 1+0 records in 00:14:09.088 1+0 records out 00:14:09.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469111 s, 8.7 MB/s 00:14:09.088 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.088 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:09.088 13:13:05 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.088 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:09.088 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:09.088 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:09.088 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:09.088 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:14:09.088 /dev/nbd10 00:14:09.346 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:09.346 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:09.346 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:14:09.346 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:09.346 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:09.346 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:09.346 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:14:09.346 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:09.346 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:09.346 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:09.346 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:09.346 1+0 records in 00:14:09.346 1+0 records out 00:14:09.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000592835 s, 6.9 MB/s 00:14:09.346 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.346 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:09.346 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.346 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:09.346 13:13:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:09.346 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:09.346 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:09.346 13:13:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:14:09.603 /dev/nbd11 00:14:09.603 13:13:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:09.603 13:13:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:09.603 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:14:09.603 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:09.603 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:09.603 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:09.604 13:13:06 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:14:09.604 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:09.604 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:09.604 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:09.604 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:09.604 1+0 records in 00:14:09.604 1+0 records out 00:14:09.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381187 s, 10.7 MB/s 00:14:09.604 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.604 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:09.604 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.604 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:09.604 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:09.604 13:13:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:09.604 13:13:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:09.604 13:13:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:14:09.862 /dev/nbd12 00:14:09.862 13:13:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:09.862 13:13:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:09.862 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:14:09.862 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:09.862 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:09.862 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:09.862 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:14:09.862 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:09.862 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:09.862 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:09.862 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:09.862 1+0 records in 00:14:09.862 1+0 records out 00:14:09.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0009807 s, 4.2 MB/s 00:14:09.862 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.862 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:09.862 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.862 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:09.862 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:09.862 13:13:06 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:09.862 13:13:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:09.862 13:13:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:14:10.120 /dev/nbd13 00:14:10.120 13:13:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:10.120 13:13:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:10.120 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:14:10.120 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:10.120 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:10.120 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:10.120 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:14:10.120 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:10.120 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:10.120 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:10.120 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:10.120 1+0 records in 00:14:10.120 1+0 records out 00:14:10.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000762908 s, 5.4 MB/s 00:14:10.120 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.120 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:10.120 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.120 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:10.120 13:13:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:10.120 13:13:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:10.120 13:13:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:10.120 13:13:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:10.120 13:13:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:10.120 13:13:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:10.379 13:13:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:10.379 { 00:14:10.379 "nbd_device": "/dev/nbd0", 00:14:10.379 "bdev_name": "nvme0n1" 00:14:10.379 }, 00:14:10.379 { 00:14:10.379 "nbd_device": "/dev/nbd1", 00:14:10.379 "bdev_name": "nvme0n2" 00:14:10.379 }, 00:14:10.379 { 00:14:10.379 "nbd_device": "/dev/nbd10", 00:14:10.379 "bdev_name": "nvme0n3" 00:14:10.379 }, 00:14:10.379 { 00:14:10.379 "nbd_device": "/dev/nbd11", 00:14:10.379 "bdev_name": "nvme1n1" 00:14:10.379 }, 00:14:10.379 { 00:14:10.379 "nbd_device": "/dev/nbd12", 00:14:10.379 "bdev_name": "nvme2n1" 00:14:10.379 }, 00:14:10.379 { 00:14:10.379 "nbd_device": "/dev/nbd13", 00:14:10.379 "bdev_name": "nvme3n1" 00:14:10.379 } 00:14:10.379 ]' 00:14:10.379 13:13:07 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:10.379 { 00:14:10.379 "nbd_device": "/dev/nbd0", 00:14:10.379 "bdev_name": "nvme0n1" 00:14:10.379 }, 00:14:10.379 { 00:14:10.379 "nbd_device": "/dev/nbd1", 00:14:10.379 "bdev_name": "nvme0n2" 00:14:10.379 }, 00:14:10.379 { 00:14:10.379 "nbd_device": "/dev/nbd10", 00:14:10.379 "bdev_name": "nvme0n3" 00:14:10.379 }, 00:14:10.379 { 00:14:10.379 "nbd_device": "/dev/nbd11", 00:14:10.379 "bdev_name": "nvme1n1" 00:14:10.379 }, 00:14:10.379 { 00:14:10.379 "nbd_device": "/dev/nbd12", 00:14:10.379 "bdev_name": "nvme2n1" 00:14:10.379 }, 00:14:10.379 { 00:14:10.379 "nbd_device": "/dev/nbd13", 00:14:10.379 "bdev_name": "nvme3n1" 00:14:10.379 } 00:14:10.379 ]' 00:14:10.379 13:13:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:10.379 13:13:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:10.379 /dev/nbd1 00:14:10.379 /dev/nbd10 00:14:10.379 /dev/nbd11 00:14:10.379 /dev/nbd12 00:14:10.379 /dev/nbd13' 00:14:10.379 13:13:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:10.379 13:13:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:10.379 /dev/nbd1 00:14:10.379 /dev/nbd10 00:14:10.379 /dev/nbd11 00:14:10.379 /dev/nbd12 00:14:10.379 /dev/nbd13' 00:14:10.379 13:13:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:14:10.379 13:13:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:14:10.379 13:13:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:14:10.379 13:13:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:14:10.379 13:13:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:14:10.379 13:13:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:10.379 13:13:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:10.379 13:13:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:10.379 13:13:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:10.379 13:13:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:10.379 13:13:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:10.379 256+0 records in 00:14:10.379 256+0 records out 00:14:10.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00773406 s, 136 MB/s 00:14:10.379 13:13:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:10.379 13:13:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:10.637 256+0 records in 00:14:10.637 256+0 records out 00:14:10.637 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151729 s, 6.9 MB/s 00:14:10.637 13:13:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:10.637 13:13:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:10.895 256+0 records in 00:14:10.895 256+0 records out 00:14:10.895 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.152756 s, 6.9 MB/s 00:14:10.895 13:13:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:10.895 13:13:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:10.895 256+0 records in 00:14:10.895 256+0 records out 00:14:10.895 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163412 s, 6.4 MB/s 00:14:10.895 13:13:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:10.895 13:13:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:11.152 256+0 records in 00:14:11.152 256+0 records out 00:14:11.152 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154968 s, 6.8 MB/s 00:14:11.152 13:13:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:11.152 13:13:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:11.410 256+0 records in 00:14:11.410 256+0 records out 00:14:11.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164176 s, 6.4 MB/s 00:14:11.410 13:13:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:11.410 13:13:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:11.410 256+0 records in 00:14:11.410 256+0 records out 00:14:11.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139511 s, 7.5 MB/s 00:14:11.410 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:14:11.410 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:11.410 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:11.410 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:11.410 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:11.410 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:11.410 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:11.410 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:11.410 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:11.410 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:11.410 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:11.410 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:11.410 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:11.410 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:11.410 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:14:11.410 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:11.410 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:11.410 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:11.410 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:11.411 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:11.411 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:11.411 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:11.411 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:11.411 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:11.411 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:11.411 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.411 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:11.668 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:11.668 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:11.668 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:11.668 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.668 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.668 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:11.668 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:11.668 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.668 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.668 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:12.233 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:12.233 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:12.233 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:12.233 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.233 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.233 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:12.233 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:12.233 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.233 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.233 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:12.233 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:12.233 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:12.233 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:12.233 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.233 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.233 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:12.491 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:12.491 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.491 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.491 13:13:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:12.749 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:12.749 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:12.749 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:12.749 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.749 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.749 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:12.749 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:12.749 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.749 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.749 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:13.006 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:13.006 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:13.006 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:13.006 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:13.006 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:13.006 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:13.006 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:13.006 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:13.006 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:13.006 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:13.263 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:13.263 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:13.263 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:13.263 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:13.263 13:13:09 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:13.263 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:13.263 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:13.263 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:13.263 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:13.263 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:13.263 13:13:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:13.521 13:13:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:13.521 13:13:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:13.521 13:13:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:13.521 13:13:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:13.521 13:13:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:13.521 13:13:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:13.521 13:13:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:13.521 13:13:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:13.521 13:13:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:13.521 13:13:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:14:13.521 13:13:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:13.521 13:13:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:14:13.521 13:13:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:13.521 13:13:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:13.521 13:13:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:13.521 13:13:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:14:13.521 13:13:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:14:13.521 13:13:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:13.778 malloc_lvol_verify 00:14:13.778 13:13:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:14.051 165e7481-37eb-4a97-94ae-2a6451986ec3 00:14:14.320 13:13:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:14.320 00c09e35-9eb9-4f82-944f-9e922477c414 00:14:14.320 13:13:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:14.577 /dev/nbd0 00:14:14.834 13:13:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:14:14.834 mke2fs 1.46.5 (30-Dec-2021) 00:14:14.834 Discarding device blocks: 0/4096 done 
00:14:14.835 Creating filesystem with 4096 1k blocks and 1024 inodes 00:14:14.835 00:14:14.835 Allocating group tables: 0/1 done 00:14:14.835 Writing inode tables: 0/1 done 00:14:14.835 Creating journal (1024 blocks): done 00:14:14.835 Writing superblocks and filesystem accounting information: 0/1 done 00:14:14.835 00:14:14.835 13:13:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:14:14.835 13:13:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:14.835 13:13:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:14.835 13:13:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:14.835 13:13:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:14.835 13:13:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:14.835 13:13:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:14.835 13:13:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:15.091 13:13:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:15.092 13:13:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:15.092 13:13:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:15.092 13:13:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:15.092 13:13:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:15.092 13:13:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:15.092 13:13:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:15.092 13:13:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:15.092 13:13:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:14:15.092 13:13:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:14:15.092 13:13:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 85708 00:14:15.092 13:13:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 85708 ']' 00:14:15.092 13:13:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 85708 00:14:15.092 13:13:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:14:15.092 13:13:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:15.092 13:13:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85708 00:14:15.092 killing process with pid 85708 00:14:15.092 13:13:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:15.092 13:13:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:15.092 13:13:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85708' 00:14:15.092 13:13:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@965 -- # kill 85708 00:14:15.092 13:13:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@970 -- # wait 85708 00:14:15.349 13:13:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:14:15.349 00:14:15.349 real 0m12.038s 00:14:15.349 user 0m17.355s 00:14:15.349 sys 0m4.312s 00:14:15.349 13:13:11 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:14:15.349 ************************************ 00:14:15.349 END TEST bdev_nbd 00:14:15.349 ************************************ 00:14:15.349 13:13:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:15.349 13:13:11 blockdev_xnvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:14:15.349 13:13:11 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = nvme ']' 00:14:15.349 13:13:11 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = gpt ']' 00:14:15.349 13:13:11 blockdev_xnvme -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:14:15.349 13:13:11 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:15.349 13:13:11 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:15.349 13:13:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:15.349 ************************************ 00:14:15.349 START TEST bdev_fio 00:14:15.349 ************************************ 00:14:15.349 13:13:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1121 -- # fio_test_suite '' 00:14:15.349 13:13:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:14:15.349 13:13:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:14:15.349 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:14:15.349 13:13:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:14:15.349 13:13:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:14:15.349 13:13:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:14:15.350 13:13:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:14:15.350 13:13:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:14:15.350 13:13:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:15.350 13:13:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=verify 00:14:15.350 13:13:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type=AIO 00:14:15.350 13:13:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:14:15.350 13:13:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:14:15.350 13:13:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:15.350 13:13:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z verify ']' 00:14:15.350 13:13:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:14:15.350 13:13:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:15.350 13:13:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:14:15.350 13:13:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1309 -- # '[' verify == verify ']' 00:14:15.350 13:13:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1310 -- # cat 00:14:15.350 13:13:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1319 -- # '[' AIO == AIO ']' 00:14:15.350 13:13:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1320 -- # /usr/src/fio/fio --version 00:14:15.350 
13:13:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1320 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1321 -- # echo serialize_overlap=1 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n1]' 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n1 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n2]' 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n2 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n3]' 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n3 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme1n1]' 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme1n1 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n1]' 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n1 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme3n1]' 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme3n1 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:15.350 ************************************ 00:14:15.350 START TEST bdev_fio_rw_verify 00:14:15.350 ************************************ 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # local sanitizers 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # shift 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local asan_lib= 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libasan 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # break 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:15.350 13:13:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:15.608 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:15.608 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:15.608 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:15.608 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:15.608 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:15.608 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:15.608 fio-3.35 00:14:15.608 Starting 6 threads 00:14:27.801 00:14:27.801 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=86122: Mon Jul 15 13:13:22 2024 00:14:27.801 read: IOPS=28.2k, 
BW=110MiB/s (115MB/s)(1100MiB/10001msec) 00:14:27.801 slat (usec): min=3, max=1240, avg= 7.37, stdev= 5.31 00:14:27.801 clat (usec): min=83, max=5008, avg=655.86, stdev=244.93 00:14:27.801 lat (usec): min=90, max=5020, avg=663.23, stdev=245.62 00:14:27.801 clat percentiles (usec): 00:14:27.801 | 50.000th=[ 676], 99.000th=[ 1254], 99.900th=[ 1860], 99.990th=[ 3982], 00:14:27.801 | 99.999th=[ 5014] 00:14:27.801 write: IOPS=28.5k, BW=111MiB/s (117MB/s)(1114MiB/10001msec); 0 zone resets 00:14:27.801 slat (usec): min=13, max=3076, avg=27.54, stdev=29.86 00:14:27.801 clat (usec): min=88, max=3766, avg=750.42, stdev=247.01 00:14:27.801 lat (usec): min=106, max=3803, avg=777.96, stdev=249.52 00:14:27.801 clat percentiles (usec): 00:14:27.801 | 50.000th=[ 758], 99.000th=[ 1418], 99.900th=[ 1876], 99.990th=[ 2606], 00:14:27.801 | 99.999th=[ 3392] 00:14:27.801 bw ( KiB/s): min=94656, max=140320, per=99.95%, avg=114028.63, stdev=2318.84, samples=114 00:14:27.801 iops : min=23664, max=35080, avg=28507.05, stdev=579.70, samples=114 00:14:27.801 lat (usec) : 100=0.01%, 250=2.89%, 500=18.40%, 750=34.40%, 1000=35.30% 00:14:27.801 lat (msec) : 2=8.94%, 4=0.07%, 10=0.01% 00:14:27.801 cpu : usr=59.59%, sys=26.77%, ctx=7540, majf=0, minf=25943 00:14:27.801 IO depths : 1=12.0%, 2=24.5%, 4=50.5%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:27.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.801 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.801 issued rwts: total=281670,285233,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.801 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:27.801 00:14:27.802 Run status group 0 (all jobs): 00:14:27.802 READ: bw=110MiB/s (115MB/s), 110MiB/s-110MiB/s (115MB/s-115MB/s), io=1100MiB (1154MB), run=10001-10001msec 00:14:27.802 WRITE: bw=111MiB/s (117MB/s), 111MiB/s-111MiB/s (117MB/s-117MB/s), io=1114MiB (1168MB), run=10001-10001msec 00:14:27.802 ----------------------------------------------------- 00:14:27.802 Suppressions used: 00:14:27.802 count bytes template 00:14:27.802 6 48 /usr/src/fio/parse.c 00:14:27.802 3378 324288 /usr/src/fio/iolog.c 00:14:27.802 1 8 libtcmalloc_minimal.so 00:14:27.802 1 904 libcrypto.so 00:14:27.802 ----------------------------------------------------- 00:14:27.802 00:14:27.802 00:14:27.802 real 0m11.324s 00:14:27.802 user 0m36.589s 00:14:27.802 sys 0m16.435s 00:14:27.802 13:13:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:27.802 13:13:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:14:27.802 ************************************ 00:14:27.802 END TEST bdev_fio_rw_verify 00:14:27.802 ************************************ 00:14:27.802 13:13:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:14:27.802 13:13:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:27.802 13:13:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:14:27.802 13:13:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:27.802 13:13:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=trim 00:14:27.802 13:13:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type= 00:14:27.802 13:13:23 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1279 -- # local env_context= 00:14:27.802 13:13:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:14:27.802 13:13:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:27.802 13:13:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z trim ']' 00:14:27.802 13:13:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:14:27.802 13:13:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:27.802 13:13:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:14:27.802 13:13:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1309 -- # '[' trim == verify ']' 00:14:27.802 13:13:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # '[' trim == trim ']' 00:14:27.802 13:13:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo rw=trimwrite 00:14:27.802 13:13:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:27.802 13:13:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "ec2d030c-70ef-486b-8250-cee5a12ca2c3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ec2d030c-70ef-486b-8250-cee5a12ca2c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "e1d95974-c615-4929-9611-a3384b8556d1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e1d95974-c615-4929-9611-a3384b8556d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "0346709c-a137-4e28-a54d-4e5340dc71e5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0346709c-a137-4e28-a54d-4e5340dc71e5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "466d8e15-0831-49c9-ad88-d728e63ef9bd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": 
"466d8e15-0831-49c9-ad88-d728e63ef9bd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "c7de1f5c-8f16-48b0-b3a2-7c1c799d5737"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c7de1f5c-8f16-48b0-b3a2-7c1c799d5737",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "a6566a66-57dd-4a98-8425-84478416c11f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a6566a66-57dd-4a98-8425-84478416c11f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:14:27.802 13:13:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:14:27.802 13:13:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:27.802 /home/vagrant/spdk_repo/spdk 00:14:27.802 13:13:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # popd 00:14:27.802 13:13:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:14:27.802 13:13:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@364 -- # return 0 00:14:27.802 00:14:27.802 real 0m11.507s 00:14:27.802 user 0m36.680s 00:14:27.802 sys 0m16.520s 00:14:27.802 13:13:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:27.802 13:13:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:27.802 ************************************ 00:14:27.802 END TEST bdev_fio 00:14:27.802 ************************************ 00:14:27.802 13:13:23 blockdev_xnvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:27.802 13:13:23 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:27.802 13:13:23 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:14:27.802 13:13:23 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:27.802 13:13:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:27.802 ************************************ 00:14:27.802 START TEST bdev_verify 00:14:27.802 
************************************ 00:14:27.802 13:13:23 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:27.802 [2024-07-15 13:13:23.625658] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:27.802 [2024-07-15 13:13:23.625875] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86286 ] 00:14:27.802 [2024-07-15 13:13:23.780748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:27.802 [2024-07-15 13:13:23.884542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.802 [2024-07-15 13:13:23.884581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.802 Running I/O for 5 seconds... 00:14:33.061 00:14:33.061 Latency(us) 00:14:33.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.061 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:33.061 Verification LBA range: start 0x0 length 0x80000 00:14:33.061 nvme0n1 : 5.05 1721.94 6.73 0.00 0.00 74199.66 10128.29 63867.81 00:14:33.061 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:33.061 Verification LBA range: start 0x80000 length 0x80000 00:14:33.061 nvme0n1 : 5.05 1620.85 6.33 0.00 0.00 78828.87 15013.70 69110.69 00:14:33.061 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:33.061 Verification LBA range: start 0x0 length 0x80000 00:14:33.061 nvme0n2 : 5.06 1720.86 6.72 0.00 0.00 74097.32 13583.83 58148.31 00:14:33.061 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:33.061 Verification LBA range: start 0x80000 length 0x80000 00:14:33.061 nvme0n2 : 5.07 1616.71 6.32 0.00 0.00 78870.75 11856.06 70063.94 00:14:33.061 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:33.061 Verification LBA range: start 0x0 length 0x80000 00:14:33.061 nvme0n3 : 5.06 1719.68 6.72 0.00 0.00 74005.86 11260.28 67680.81 00:14:33.061 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:33.061 Verification LBA range: start 0x80000 length 0x80000 00:14:33.061 nvme0n3 : 5.06 1619.79 6.33 0.00 0.00 78564.84 16801.05 70063.94 00:14:33.061 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:33.061 Verification LBA range: start 0x0 length 0x20000 00:14:33.061 nvme1n1 : 5.08 1739.48 6.79 0.00 0.00 73026.43 6911.07 68634.07 00:14:33.061 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:33.061 Verification LBA range: start 0x20000 length 0x20000 00:14:33.061 nvme1n1 : 5.07 1616.05 6.31 0.00 0.00 78600.77 14298.76 65297.69 00:14:33.061 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:33.061 Verification LBA range: start 0x0 length 0xbd0bd 00:14:33.061 nvme2n1 : 5.07 3084.16 12.05 0.00 0.00 41047.43 4974.78 59101.56 00:14:33.061 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:33.061 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:14:33.061 nvme2n1 : 5.07 2869.98 11.21 0.00 0.00 44062.13 5987.61 62914.56 00:14:33.061 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:33.061 
Verification LBA range: start 0x0 length 0xa0000 00:14:33.061 nvme3n1 : 5.08 1738.72 6.79 0.00 0.00 72707.18 9472.93 71493.82 00:14:33.061 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:33.061 Verification LBA range: start 0xa0000 length 0xa0000 00:14:33.061 nvme3n1 : 5.08 1637.81 6.40 0.00 0.00 77159.95 3395.96 68157.44 00:14:33.061 =================================================================================================================== 00:14:33.061 Total : 22706.02 88.70 0.00 0.00 67151.39 3395.96 71493.82 00:14:33.061 00:14:33.061 real 0m6.053s 00:14:33.061 user 0m9.172s 00:14:33.061 sys 0m1.879s 00:14:33.061 13:13:29 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:33.061 13:13:29 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:14:33.061 ************************************ 00:14:33.061 END TEST bdev_verify 00:14:33.061 ************************************ 00:14:33.061 13:13:29 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:33.061 13:13:29 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:14:33.061 13:13:29 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:33.061 13:13:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:33.061 ************************************ 00:14:33.061 START TEST bdev_verify_big_io 00:14:33.061 ************************************ 00:14:33.061 13:13:29 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:33.061 [2024-07-15 13:13:29.738119] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:33.061 [2024-07-15 13:13:29.738443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86374 ] 00:14:33.319 [2024-07-15 13:13:29.890858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:33.319 [2024-07-15 13:13:29.995545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.319 [2024-07-15 13:13:29.995584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.576 Running I/O for 5 seconds... 
00:14:40.128 00:14:40.128 Latency(us) 00:14:40.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.128 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:40.128 Verification LBA range: start 0x0 length 0x8000 00:14:40.128 nvme0n1 : 5.98 107.00 6.69 0.00 0.00 1172203.71 50283.99 1166779.11 00:14:40.128 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:40.128 Verification LBA range: start 0x8000 length 0x8000 00:14:40.128 nvme0n1 : 6.01 103.80 6.49 0.00 0.00 1206846.97 68634.07 2531834.41 00:14:40.128 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:40.128 Verification LBA range: start 0x0 length 0x8000 00:14:40.128 nvme0n2 : 5.97 117.84 7.37 0.00 0.00 1038012.72 12690.15 1052389.00 00:14:40.128 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:40.128 Verification LBA range: start 0x8000 length 0x8000 00:14:40.128 nvme0n2 : 6.00 104.05 6.50 0.00 0.00 1166731.76 47900.86 2242046.14 00:14:40.128 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:40.128 Verification LBA range: start 0x0 length 0x8000 00:14:40.128 nvme0n3 : 5.98 139.21 8.70 0.00 0.00 851529.18 36223.53 1037136.99 00:14:40.128 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:40.128 Verification LBA range: start 0x8000 length 0x8000 00:14:40.128 nvme0n3 : 6.02 143.64 8.98 0.00 0.00 814498.84 60531.43 1334551.27 00:14:40.128 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:40.128 Verification LBA range: start 0x0 length 0x2000 00:14:40.128 nvme1n1 : 5.96 115.36 7.21 0.00 0.00 1000854.05 17754.30 1555705.48 00:14:40.128 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:40.128 Verification LBA range: start 0x2000 length 0x2000 00:14:40.128 nvme1n1 : 6.00 117.33 7.33 0.00 0.00 966561.89 40274.85 1845493.76 00:14:40.128 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:40.128 Verification LBA range: start 0x0 length 0xbd0b 00:14:40.128 nvme2n1 : 5.98 171.26 10.70 0.00 0.00 654383.83 27763.43 606267.58 00:14:40.129 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:40.129 Verification LBA range: start 0xbd0b length 0xbd0b 00:14:40.129 nvme2n1 : 6.02 115.97 7.25 0.00 0.00 946611.31 8877.15 2409818.30 00:14:40.129 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:40.129 Verification LBA range: start 0x0 length 0xa000 00:14:40.129 nvme3n1 : 5.97 117.91 7.37 0.00 0.00 924005.30 13881.72 1998013.91 00:14:40.129 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:40.129 Verification LBA range: start 0xa000 length 0xa000 00:14:40.129 nvme3n1 : 6.02 151.45 9.47 0.00 0.00 706529.04 11439.01 808356.77 00:14:40.129 =================================================================================================================== 00:14:40.129 Total : 1504.83 94.05 0.00 0.00 928033.80 8877.15 2531834.41 00:14:40.129 00:14:40.129 real 0m6.981s 00:14:40.129 user 0m12.603s 00:14:40.129 sys 0m0.576s 00:14:40.129 13:13:36 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:40.129 ************************************ 00:14:40.129 END TEST bdev_verify_big_io 00:14:40.129 13:13:36 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.129 ************************************ 00:14:40.129 
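(Aside from the editor, not part of the captured output: the bdev_verify and bdev_verify_big_io passes above both drive the bdevperf example binary directly against the xnvme bdevs described in bdev.json; only the I/O size differs between the two runs. A minimal sketch of that invocation follows, with every flag value taken from the run_test lines in this log; only the choice to run it by hand, outside the test harness, is assumed.)

# flags as used by the test script:
#   -q 128     queue depth per bdev
#   -o 4096    I/O size in bytes (the big-I/O pass uses -o 65536 instead)
#   -w verify  the verify workload shown in the Job lines above
#   -t 5       run time in seconds
#   -m 0x3     core mask: cores 0 and 1, one reactor each, as in the log
#   -C         passed through unchanged from the test script
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3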
13:13:36 blockdev_xnvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:40.129 13:13:36 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:14:40.129 13:13:36 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:40.129 13:13:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:40.129 ************************************ 00:14:40.129 START TEST bdev_write_zeroes 00:14:40.129 ************************************ 00:14:40.129 13:13:36 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:40.129 [2024-07-15 13:13:36.745939] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:40.129 [2024-07-15 13:13:36.746169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86480 ] 00:14:40.387 [2024-07-15 13:13:36.888741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.387 [2024-07-15 13:13:36.987850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.645 Running I/O for 1 seconds... 00:14:41.580 00:14:41.580 Latency(us) 00:14:41.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.580 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:41.580 nvme0n1 : 1.02 9373.60 36.62 0.00 0.00 13638.58 7268.54 21924.77 00:14:41.580 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:41.580 nvme0n2 : 1.03 9359.68 36.56 0.00 0.00 13645.34 7298.33 21924.77 00:14:41.580 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:41.580 nvme0n3 : 1.01 9465.50 36.97 0.00 0.00 13480.22 7298.33 22639.71 00:14:41.580 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:41.580 nvme1n1 : 1.02 9450.84 36.92 0.00 0.00 13488.79 7298.33 22758.87 00:14:41.580 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:41.580 nvme2n1 : 1.02 14917.07 58.27 0.00 0.00 8536.71 3723.64 15013.70 00:14:41.580 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:41.580 nvme3n1 : 1.02 9387.38 36.67 0.00 0.00 13501.86 6136.55 21924.77 00:14:41.580 =================================================================================================================== 00:14:41.580 Total : 61954.06 242.01 0.00 0.00 12343.07 3723.64 22758.87 00:14:41.838 00:14:41.838 real 0m1.871s 00:14:41.838 user 0m1.139s 00:14:41.838 sys 0m0.558s 00:14:41.838 13:13:38 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:41.838 13:13:38 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:14:41.838 ************************************ 00:14:41.838 END TEST bdev_write_zeroes 00:14:41.838 ************************************ 00:14:42.097 13:13:38 blockdev_xnvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 
4096 -w write_zeroes -t 1 '' 00:14:42.097 13:13:38 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:14:42.097 13:13:38 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:42.097 13:13:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:42.097 ************************************ 00:14:42.097 START TEST bdev_json_nonenclosed 00:14:42.097 ************************************ 00:14:42.097 13:13:38 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:42.097 [2024-07-15 13:13:38.673418] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:42.097 [2024-07-15 13:13:38.673587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86517 ] 00:14:42.097 [2024-07-15 13:13:38.814407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.355 [2024-07-15 13:13:38.911180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.355 [2024-07-15 13:13:38.911318] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:42.355 [2024-07-15 13:13:38.911359] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:42.355 [2024-07-15 13:13:38.911375] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:42.355 00:14:42.355 real 0m0.457s 00:14:42.355 user 0m0.231s 00:14:42.355 sys 0m0.122s 00:14:42.355 13:13:39 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:42.355 13:13:39 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:42.355 ************************************ 00:14:42.355 END TEST bdev_json_nonenclosed 00:14:42.355 ************************************ 00:14:42.613 13:13:39 blockdev_xnvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:42.613 13:13:39 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:14:42.613 13:13:39 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:42.613 13:13:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:42.613 ************************************ 00:14:42.613 START TEST bdev_json_nonarray 00:14:42.613 ************************************ 00:14:42.613 13:13:39 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:42.613 [2024-07-15 13:13:39.196299] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:14:42.613 [2024-07-15 13:13:39.196488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86542 ] 00:14:42.613 [2024-07-15 13:13:39.346577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.871 [2024-07-15 13:13:39.440511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.871 [2024-07-15 13:13:39.440640] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:14:42.871 [2024-07-15 13:13:39.440679] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:42.871 [2024-07-15 13:13:39.440696] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:42.871 00:14:42.871 real 0m0.461s 00:14:42.871 user 0m0.227s 00:14:42.871 sys 0m0.129s 00:14:42.871 13:13:39 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:42.871 ************************************ 00:14:42.871 END TEST bdev_json_nonarray 00:14:42.871 13:13:39 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:42.871 ************************************ 00:14:42.871 13:13:39 blockdev_xnvme -- bdev/blockdev.sh@787 -- # [[ xnvme == bdev ]] 00:14:42.871 13:13:39 blockdev_xnvme -- bdev/blockdev.sh@794 -- # [[ xnvme == gpt ]] 00:14:42.871 13:13:39 blockdev_xnvme -- bdev/blockdev.sh@798 -- # [[ xnvme == crypto_sw ]] 00:14:42.871 13:13:39 blockdev_xnvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:14:42.871 13:13:39 blockdev_xnvme -- bdev/blockdev.sh@811 -- # cleanup 00:14:42.871 13:13:39 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:42.871 13:13:39 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:43.129 13:13:39 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:14:43.129 13:13:39 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:14:43.129 13:13:39 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:14:43.129 13:13:39 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:14:43.129 13:13:39 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:43.386 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:51.486 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:51.486 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:51.486 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:51.486 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:51.486 00:14:51.486 real 0m57.605s 00:14:51.486 user 1m30.758s 00:14:51.486 sys 0m44.950s 00:14:51.486 13:13:47 blockdev_xnvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:51.486 ************************************ 00:14:51.486 END TEST blockdev_xnvme 00:14:51.486 ************************************ 00:14:51.486 13:13:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:51.486 13:13:47 -- spdk/autotest.sh@251 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:51.486 13:13:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:51.486 13:13:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:51.486 13:13:47 -- common/autotest_common.sh@10 -- 
# set +x 00:14:51.486 ************************************ 00:14:51.486 START TEST ublk 00:14:51.486 ************************************ 00:14:51.486 13:13:47 ublk -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:51.486 * Looking for test storage... 00:14:51.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:51.486 13:13:47 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:51.486 13:13:47 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:51.486 13:13:47 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:51.486 13:13:47 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:51.486 13:13:47 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:51.486 13:13:47 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:51.486 13:13:47 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:51.486 13:13:47 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:51.486 13:13:47 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:51.486 13:13:47 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:14:51.486 13:13:47 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:14:51.486 13:13:47 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:14:51.486 13:13:47 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:14:51.486 13:13:47 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:14:51.486 13:13:47 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:14:51.486 13:13:47 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:14:51.486 13:13:47 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:14:51.486 13:13:47 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:14:51.487 13:13:47 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:14:51.487 13:13:47 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:14:51.487 13:13:47 ublk -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:51.487 13:13:47 ublk -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:51.487 13:13:47 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:51.487 ************************************ 00:14:51.487 START TEST test_save_ublk_config 00:14:51.487 ************************************ 00:14:51.487 13:13:47 ublk.test_save_ublk_config -- common/autotest_common.sh@1121 -- # test_save_config 00:14:51.487 13:13:47 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:14:51.487 13:13:47 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=86836 00:14:51.487 13:13:47 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:14:51.487 13:13:47 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:14:51.487 13:13:47 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 86836 00:14:51.487 13:13:47 ublk.test_save_ublk_config -- common/autotest_common.sh@827 -- # '[' -z 86836 ']' 00:14:51.487 13:13:47 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.487 13:13:47 ublk.test_save_ublk_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:51.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.487 13:13:47 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
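(Aside from the editor, not part of the captured output: test_save_ublk_config, which is starting here, boils down to a save/restore round trip of the ublk target state. A sketch of that flow under the paths seen in this log; the /tmp/ublk_config.json filename and the shell backgrounding are illustrative assumptions, and the test itself feeds the saved JSON back through /dev/fd/63 rather than a file.)

# 1. start a target with ublk tracing enabled and wait for /var/tmp/spdk.sock
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk &

# 2. create the ublk target and expose a malloc bdev as /dev/ublkb0
#    (the ublk_create_target / ublk_start_disk parameters appear verbatim in the
#     saved JSON further down: cpumask "1", bdev malloc0, 1 queue, queue depth 128)

# 3. capture the full runtime configuration ...
/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /tmp/ublk_config.json

# 4. ... and bring up a second target with the exact same ublk state
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /tmp/ublk_config.json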
00:14:51.487 13:13:47 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:51.487 13:13:47 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:51.487 [2024-07-15 13:13:47.755095] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:51.487 [2024-07-15 13:13:47.755297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86836 ] 00:14:51.487 [2024-07-15 13:13:47.906497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.487 [2024-07-15 13:13:48.020219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.053 13:13:48 ublk.test_save_ublk_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:52.053 13:13:48 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # return 0 00:14:52.053 13:13:48 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:52.053 13:13:48 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:52.053 13:13:48 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.053 13:13:48 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:52.053 [2024-07-15 13:13:48.564187] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:52.053 [2024-07-15 13:13:48.564597] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:52.053 malloc0 00:14:52.053 [2024-07-15 13:13:48.604369] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:52.053 [2024-07-15 13:13:48.604477] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:52.053 [2024-07-15 13:13:48.604514] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:52.053 [2024-07-15 13:13:48.604523] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:52.053 [2024-07-15 13:13:48.613312] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:52.053 [2024-07-15 13:13:48.613348] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:52.053 [2024-07-15 13:13:48.619195] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:52.053 [2024-07-15 13:13:48.619336] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:52.053 [2024-07-15 13:13:48.636178] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:52.053 0 00:14:52.053 13:13:48 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.053 13:13:48 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:52.053 13:13:48 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.053 13:13:48 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:52.312 13:13:48 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.312 13:13:48 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:14:52.312 "subsystems": [ 00:14:52.312 { 00:14:52.312 "subsystem": "keyring", 00:14:52.312 "config": [] 00:14:52.312 }, 00:14:52.312 { 00:14:52.312 "subsystem": "iobuf", 00:14:52.312 "config": [ 00:14:52.312 { 
00:14:52.312 "method": "iobuf_set_options", 00:14:52.312 "params": { 00:14:52.312 "small_pool_count": 8192, 00:14:52.312 "large_pool_count": 1024, 00:14:52.312 "small_bufsize": 8192, 00:14:52.312 "large_bufsize": 135168 00:14:52.312 } 00:14:52.312 } 00:14:52.312 ] 00:14:52.312 }, 00:14:52.312 { 00:14:52.312 "subsystem": "sock", 00:14:52.312 "config": [ 00:14:52.312 { 00:14:52.312 "method": "sock_set_default_impl", 00:14:52.312 "params": { 00:14:52.312 "impl_name": "posix" 00:14:52.312 } 00:14:52.312 }, 00:14:52.312 { 00:14:52.312 "method": "sock_impl_set_options", 00:14:52.312 "params": { 00:14:52.312 "impl_name": "ssl", 00:14:52.312 "recv_buf_size": 4096, 00:14:52.312 "send_buf_size": 4096, 00:14:52.312 "enable_recv_pipe": true, 00:14:52.312 "enable_quickack": false, 00:14:52.312 "enable_placement_id": 0, 00:14:52.312 "enable_zerocopy_send_server": true, 00:14:52.312 "enable_zerocopy_send_client": false, 00:14:52.312 "zerocopy_threshold": 0, 00:14:52.312 "tls_version": 0, 00:14:52.312 "enable_ktls": false 00:14:52.312 } 00:14:52.312 }, 00:14:52.312 { 00:14:52.312 "method": "sock_impl_set_options", 00:14:52.312 "params": { 00:14:52.312 "impl_name": "posix", 00:14:52.312 "recv_buf_size": 2097152, 00:14:52.312 "send_buf_size": 2097152, 00:14:52.312 "enable_recv_pipe": true, 00:14:52.312 "enable_quickack": false, 00:14:52.312 "enable_placement_id": 0, 00:14:52.312 "enable_zerocopy_send_server": true, 00:14:52.312 "enable_zerocopy_send_client": false, 00:14:52.312 "zerocopy_threshold": 0, 00:14:52.312 "tls_version": 0, 00:14:52.312 "enable_ktls": false 00:14:52.312 } 00:14:52.312 } 00:14:52.312 ] 00:14:52.312 }, 00:14:52.312 { 00:14:52.312 "subsystem": "vmd", 00:14:52.312 "config": [] 00:14:52.312 }, 00:14:52.312 { 00:14:52.312 "subsystem": "accel", 00:14:52.312 "config": [ 00:14:52.312 { 00:14:52.312 "method": "accel_set_options", 00:14:52.312 "params": { 00:14:52.312 "small_cache_size": 128, 00:14:52.312 "large_cache_size": 16, 00:14:52.312 "task_count": 2048, 00:14:52.312 "sequence_count": 2048, 00:14:52.312 "buf_count": 2048 00:14:52.312 } 00:14:52.312 } 00:14:52.312 ] 00:14:52.312 }, 00:14:52.312 { 00:14:52.312 "subsystem": "bdev", 00:14:52.312 "config": [ 00:14:52.312 { 00:14:52.312 "method": "bdev_set_options", 00:14:52.312 "params": { 00:14:52.312 "bdev_io_pool_size": 65535, 00:14:52.312 "bdev_io_cache_size": 256, 00:14:52.312 "bdev_auto_examine": true, 00:14:52.312 "iobuf_small_cache_size": 128, 00:14:52.312 "iobuf_large_cache_size": 16 00:14:52.312 } 00:14:52.312 }, 00:14:52.312 { 00:14:52.312 "method": "bdev_raid_set_options", 00:14:52.312 "params": { 00:14:52.312 "process_window_size_kb": 1024 00:14:52.312 } 00:14:52.312 }, 00:14:52.312 { 00:14:52.312 "method": "bdev_iscsi_set_options", 00:14:52.312 "params": { 00:14:52.312 "timeout_sec": 30 00:14:52.312 } 00:14:52.312 }, 00:14:52.312 { 00:14:52.312 "method": "bdev_nvme_set_options", 00:14:52.312 "params": { 00:14:52.312 "action_on_timeout": "none", 00:14:52.312 "timeout_us": 0, 00:14:52.312 "timeout_admin_us": 0, 00:14:52.312 "keep_alive_timeout_ms": 10000, 00:14:52.312 "arbitration_burst": 0, 00:14:52.312 "low_priority_weight": 0, 00:14:52.312 "medium_priority_weight": 0, 00:14:52.312 "high_priority_weight": 0, 00:14:52.312 "nvme_adminq_poll_period_us": 10000, 00:14:52.312 "nvme_ioq_poll_period_us": 0, 00:14:52.312 "io_queue_requests": 0, 00:14:52.312 "delay_cmd_submit": true, 00:14:52.312 "transport_retry_count": 4, 00:14:52.312 "bdev_retry_count": 3, 00:14:52.312 "transport_ack_timeout": 0, 00:14:52.312 
"ctrlr_loss_timeout_sec": 0, 00:14:52.312 "reconnect_delay_sec": 0, 00:14:52.312 "fast_io_fail_timeout_sec": 0, 00:14:52.312 "disable_auto_failback": false, 00:14:52.312 "generate_uuids": false, 00:14:52.312 "transport_tos": 0, 00:14:52.312 "nvme_error_stat": false, 00:14:52.312 "rdma_srq_size": 0, 00:14:52.312 "io_path_stat": false, 00:14:52.312 "allow_accel_sequence": false, 00:14:52.312 "rdma_max_cq_size": 0, 00:14:52.312 "rdma_cm_event_timeout_ms": 0, 00:14:52.312 "dhchap_digests": [ 00:14:52.312 "sha256", 00:14:52.312 "sha384", 00:14:52.312 "sha512" 00:14:52.312 ], 00:14:52.312 "dhchap_dhgroups": [ 00:14:52.312 "null", 00:14:52.312 "ffdhe2048", 00:14:52.312 "ffdhe3072", 00:14:52.312 "ffdhe4096", 00:14:52.312 "ffdhe6144", 00:14:52.312 "ffdhe8192" 00:14:52.312 ] 00:14:52.312 } 00:14:52.312 }, 00:14:52.312 { 00:14:52.312 "method": "bdev_nvme_set_hotplug", 00:14:52.312 "params": { 00:14:52.313 "period_us": 100000, 00:14:52.313 "enable": false 00:14:52.313 } 00:14:52.313 }, 00:14:52.313 { 00:14:52.313 "method": "bdev_malloc_create", 00:14:52.313 "params": { 00:14:52.313 "name": "malloc0", 00:14:52.313 "num_blocks": 8192, 00:14:52.313 "block_size": 4096, 00:14:52.313 "physical_block_size": 4096, 00:14:52.313 "uuid": "b25d555a-d390-4056-8d1b-f6b3a4851c0a", 00:14:52.313 "optimal_io_boundary": 0 00:14:52.313 } 00:14:52.313 }, 00:14:52.313 { 00:14:52.313 "method": "bdev_wait_for_examine" 00:14:52.313 } 00:14:52.313 ] 00:14:52.313 }, 00:14:52.313 { 00:14:52.313 "subsystem": "scsi", 00:14:52.313 "config": null 00:14:52.313 }, 00:14:52.313 { 00:14:52.313 "subsystem": "scheduler", 00:14:52.313 "config": [ 00:14:52.313 { 00:14:52.313 "method": "framework_set_scheduler", 00:14:52.313 "params": { 00:14:52.313 "name": "static" 00:14:52.313 } 00:14:52.313 } 00:14:52.313 ] 00:14:52.313 }, 00:14:52.313 { 00:14:52.313 "subsystem": "vhost_scsi", 00:14:52.313 "config": [] 00:14:52.313 }, 00:14:52.313 { 00:14:52.313 "subsystem": "vhost_blk", 00:14:52.313 "config": [] 00:14:52.313 }, 00:14:52.313 { 00:14:52.313 "subsystem": "ublk", 00:14:52.313 "config": [ 00:14:52.313 { 00:14:52.313 "method": "ublk_create_target", 00:14:52.313 "params": { 00:14:52.313 "cpumask": "1" 00:14:52.313 } 00:14:52.313 }, 00:14:52.313 { 00:14:52.313 "method": "ublk_start_disk", 00:14:52.313 "params": { 00:14:52.313 "bdev_name": "malloc0", 00:14:52.313 "ublk_id": 0, 00:14:52.313 "num_queues": 1, 00:14:52.313 "queue_depth": 128 00:14:52.313 } 00:14:52.313 } 00:14:52.313 ] 00:14:52.313 }, 00:14:52.313 { 00:14:52.313 "subsystem": "nbd", 00:14:52.313 "config": [] 00:14:52.313 }, 00:14:52.313 { 00:14:52.313 "subsystem": "nvmf", 00:14:52.313 "config": [ 00:14:52.313 { 00:14:52.313 "method": "nvmf_set_config", 00:14:52.313 "params": { 00:14:52.313 "discovery_filter": "match_any", 00:14:52.313 "admin_cmd_passthru": { 00:14:52.313 "identify_ctrlr": false 00:14:52.313 } 00:14:52.313 } 00:14:52.313 }, 00:14:52.313 { 00:14:52.313 "method": "nvmf_set_max_subsystems", 00:14:52.313 "params": { 00:14:52.313 "max_subsystems": 1024 00:14:52.313 } 00:14:52.313 }, 00:14:52.313 { 00:14:52.313 "method": "nvmf_set_crdt", 00:14:52.313 "params": { 00:14:52.313 "crdt1": 0, 00:14:52.313 "crdt2": 0, 00:14:52.313 "crdt3": 0 00:14:52.313 } 00:14:52.313 } 00:14:52.313 ] 00:14:52.313 }, 00:14:52.313 { 00:14:52.313 "subsystem": "iscsi", 00:14:52.313 "config": [ 00:14:52.313 { 00:14:52.313 "method": "iscsi_set_options", 00:14:52.313 "params": { 00:14:52.313 "node_base": "iqn.2016-06.io.spdk", 00:14:52.313 "max_sessions": 128, 00:14:52.313 "max_connections_per_session": 
2, 00:14:52.313 "max_queue_depth": 64, 00:14:52.313 "default_time2wait": 2, 00:14:52.313 "default_time2retain": 20, 00:14:52.313 "first_burst_length": 8192, 00:14:52.313 "immediate_data": true, 00:14:52.313 "allow_duplicated_isid": false, 00:14:52.313 "error_recovery_level": 0, 00:14:52.313 "nop_timeout": 60, 00:14:52.313 "nop_in_interval": 30, 00:14:52.313 "disable_chap": false, 00:14:52.313 "require_chap": false, 00:14:52.313 "mutual_chap": false, 00:14:52.313 "chap_group": 0, 00:14:52.313 "max_large_datain_per_connection": 64, 00:14:52.313 "max_r2t_per_connection": 4, 00:14:52.313 "pdu_pool_size": 36864, 00:14:52.313 "immediate_data_pool_size": 16384, 00:14:52.313 "data_out_pool_size": 2048 00:14:52.313 } 00:14:52.313 } 00:14:52.313 ] 00:14:52.313 } 00:14:52.313 ] 00:14:52.313 }' 00:14:52.313 13:13:48 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 86836 00:14:52.313 13:13:48 ublk.test_save_ublk_config -- common/autotest_common.sh@946 -- # '[' -z 86836 ']' 00:14:52.313 13:13:48 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # kill -0 86836 00:14:52.313 13:13:48 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # uname 00:14:52.313 13:13:48 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:52.313 13:13:48 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86836 00:14:52.313 13:13:48 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:52.313 killing process with pid 86836 00:14:52.313 13:13:48 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:52.313 13:13:48 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86836' 00:14:52.313 13:13:48 ublk.test_save_ublk_config -- common/autotest_common.sh@965 -- # kill 86836 00:14:52.313 13:13:48 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # wait 86836 00:14:52.571 [2024-07-15 13:13:49.303742] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:52.830 [2024-07-15 13:13:49.340267] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:52.830 [2024-07-15 13:13:49.340498] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:52.830 [2024-07-15 13:13:49.349226] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:52.830 [2024-07-15 13:13:49.349291] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:52.830 [2024-07-15 13:13:49.349308] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:52.830 [2024-07-15 13:13:49.349343] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:14:52.830 [2024-07-15 13:13:49.349550] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:14:53.088 13:13:49 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:53.088 13:13:49 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=86875 00:14:53.088 13:13:49 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 86875 00:14:53.088 13:13:49 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:14:53.088 "subsystems": [ 00:14:53.088 { 00:14:53.088 "subsystem": "keyring", 00:14:53.088 "config": [] 00:14:53.088 }, 00:14:53.088 { 00:14:53.088 "subsystem": "iobuf", 00:14:53.088 "config": [ 00:14:53.088 { 00:14:53.088 "method": "iobuf_set_options", 00:14:53.088 
"params": { 00:14:53.088 "small_pool_count": 8192, 00:14:53.088 "large_pool_count": 1024, 00:14:53.088 "small_bufsize": 8192, 00:14:53.088 "large_bufsize": 135168 00:14:53.088 } 00:14:53.088 } 00:14:53.088 ] 00:14:53.088 }, 00:14:53.088 { 00:14:53.088 "subsystem": "sock", 00:14:53.088 "config": [ 00:14:53.088 { 00:14:53.088 "method": "sock_set_default_impl", 00:14:53.088 "params": { 00:14:53.088 "impl_name": "posix" 00:14:53.088 } 00:14:53.088 }, 00:14:53.088 { 00:14:53.088 "method": "sock_impl_set_options", 00:14:53.088 "params": { 00:14:53.088 "impl_name": "ssl", 00:14:53.088 "recv_buf_size": 4096, 00:14:53.088 "send_buf_size": 4096, 00:14:53.088 "enable_recv_pipe": true, 00:14:53.088 "enable_quickack": false, 00:14:53.088 "enable_placement_id": 0, 00:14:53.088 "enable_zerocopy_send_server": true, 00:14:53.088 "enable_zerocopy_send_client": false, 00:14:53.088 "zerocopy_threshold": 0, 00:14:53.088 "tls_version": 0, 00:14:53.088 "enable_ktls": false 00:14:53.088 } 00:14:53.088 }, 00:14:53.088 { 00:14:53.088 "method": "sock_impl_set_options", 00:14:53.088 "params": { 00:14:53.088 "impl_name": "posix", 00:14:53.088 "recv_buf_size": 2097152, 00:14:53.088 "send_buf_size": 2097152, 00:14:53.088 "enable_recv_pipe": true, 00:14:53.088 "enable_quickack": false, 00:14:53.088 "enable_placement_id": 0, 00:14:53.088 "enable_zerocopy_send_server": true, 00:14:53.088 "enable_zerocopy_send_client": false, 00:14:53.088 "zerocopy_threshold": 0, 00:14:53.088 "tls_version": 0, 00:14:53.088 "enable_ktls": false 00:14:53.088 } 00:14:53.088 } 00:14:53.088 ] 00:14:53.088 }, 00:14:53.088 { 00:14:53.088 "subsystem": "vmd", 00:14:53.088 "config": [] 00:14:53.088 }, 00:14:53.088 { 00:14:53.088 "subsystem": "accel", 00:14:53.088 "config": [ 00:14:53.088 { 00:14:53.088 "method": "accel_set_options", 00:14:53.088 "params": { 00:14:53.088 "small_cache_size": 128, 00:14:53.088 "large_cache_size": 16, 00:14:53.088 "task_count": 2048, 00:14:53.088 "sequence_count": 2048, 00:14:53.088 "buf_count": 2048 00:14:53.088 } 00:14:53.088 } 00:14:53.088 ] 00:14:53.088 }, 00:14:53.088 { 00:14:53.088 "subsystem": "bdev", 00:14:53.088 "config": [ 00:14:53.088 { 00:14:53.088 "method": "bdev_set_options", 00:14:53.088 "params": { 00:14:53.088 "bdev_io_pool_size": 65535, 00:14:53.088 "bdev_io_cache_size": 256, 00:14:53.088 "bdev_auto_examine": true, 00:14:53.088 "iobuf_small_cache_size": 128, 00:14:53.088 "iobuf_large_cache_size": 16 00:14:53.088 } 00:14:53.088 }, 00:14:53.088 { 00:14:53.088 "method": "bdev_raid_set_options", 00:14:53.088 "params": { 00:14:53.088 "process_window_size_kb": 1024 00:14:53.088 } 00:14:53.088 }, 00:14:53.088 { 00:14:53.088 "method": "bdev_iscsi_set_options", 00:14:53.088 "params": { 00:14:53.088 "timeout_sec": 30 00:14:53.088 } 00:14:53.088 }, 00:14:53.088 { 00:14:53.088 "method": "bdev_nvme_set_options", 00:14:53.088 "params": { 00:14:53.088 "action_on_timeout": "none", 00:14:53.088 "timeout_us": 0, 00:14:53.088 "timeout_admin_us": 0, 00:14:53.088 "keep_alive_timeout_ms": 10000, 00:14:53.088 "arbitration_burst": 0, 00:14:53.088 "low_priority_weight": 0, 00:14:53.088 "medium_priority_weight": 0, 00:14:53.088 "high_priority_weight": 0, 00:14:53.088 "nvme_adminq_poll_period_us": 10000, 00:14:53.088 "nvme_ioq_poll_period_us": 0, 00:14:53.088 "io_queue_requests": 0, 00:14:53.088 "delay_cmd_submit": true, 00:14:53.088 "transport_retry_count": 4, 00:14:53.088 "bdev_retry_count": 3, 00:14:53.088 "transport_ack_timeout": 0, 00:14:53.088 "ctrlr_loss_timeout_sec": 0, 00:14:53.088 "reconnect_delay_sec": 0, 00:14:53.088 
"fast_io_fail_timeout_sec": 0, 00:14:53.088 "disable_auto_failback": false, 00:14:53.088 "generate_uuids": false, 00:14:53.088 "transport_tos": 0, 00:14:53.088 "nvme_error_stat": false, 00:14:53.088 "rdma_srq_size": 0, 00:14:53.088 "io_path_stat": false, 00:14:53.088 "allow_accel_sequence": false, 00:14:53.088 "rdma_max_cq_size": 0, 00:14:53.088 "rdma_cm_event_timeout_ms": 0, 00:14:53.088 "dhchap_digests": [ 00:14:53.088 "sha256", 00:14:53.088 "sha384", 00:14:53.088 "sha512" 00:14:53.088 ], 00:14:53.088 "dhchap_dhgroups": [ 00:14:53.088 "null", 00:14:53.088 "ffdhe2048", 00:14:53.088 "ffdhe3072", 00:14:53.088 "ffdhe4096", 00:14:53.088 "ffdhe6144", 00:14:53.088 "ffdhe8192" 00:14:53.088 ] 00:14:53.088 } 00:14:53.088 }, 00:14:53.088 { 00:14:53.088 "method": "bdev_nvme_set_hotplug", 00:14:53.088 "params": { 00:14:53.088 "period_us": 100000, 00:14:53.088 "enable": false 00:14:53.088 } 00:14:53.088 }, 00:14:53.088 { 00:14:53.088 "method": "bdev_malloc_create", 00:14:53.088 "params": { 00:14:53.088 "name": "malloc0", 00:14:53.088 "num_blocks": 8192, 00:14:53.088 "block_size": 4096, 00:14:53.088 "physical_block_size": 4096, 00:14:53.088 "uuid": "b25d555a-d390-4056-8d1b-f6b3a4851c0a", 00:14:53.088 "optimal_io_boundary": 0 00:14:53.088 } 00:14:53.088 }, 00:14:53.088 { 00:14:53.088 "method": "bdev_wait_for_examine" 00:14:53.088 } 00:14:53.088 ] 00:14:53.088 }, 00:14:53.088 { 00:14:53.088 "subsystem": "scsi", 00:14:53.088 "config": null 00:14:53.088 }, 00:14:53.088 { 00:14:53.088 "subsystem": "scheduler", 00:14:53.088 "config": [ 00:14:53.088 { 00:14:53.088 "method": "framework_set_scheduler", 00:14:53.088 "params": { 00:14:53.088 "name": "static" 00:14:53.088 } 00:14:53.088 } 00:14:53.088 ] 00:14:53.088 }, 00:14:53.088 { 00:14:53.088 "subsystem": "vhost_scsi", 00:14:53.088 "config": [] 00:14:53.088 }, 00:14:53.088 { 00:14:53.088 "subsystem": "vhost_blk", 00:14:53.088 "config": [] 00:14:53.088 }, 00:14:53.088 { 00:14:53.088 "subsystem": "ublk", 00:14:53.088 "config": [ 00:14:53.088 { 00:14:53.088 "method": "ublk_create_target", 00:14:53.089 "params": { 00:14:53.089 "cpumask": "1" 00:14:53.089 } 00:14:53.089 }, 00:14:53.089 { 00:14:53.089 "method": "ublk_start_disk", 00:14:53.089 "params": { 00:14:53.089 "bdev_name": "malloc0", 00:14:53.089 "ublk_id": 0, 00:14:53.089 "num_queues": 1, 00:14:53.089 "queue_depth": 128 00:14:53.089 } 00:14:53.089 } 00:14:53.089 ] 00:14:53.089 }, 00:14:53.089 { 00:14:53.089 "subsystem": "nbd", 00:14:53.089 "config": [] 00:14:53.089 }, 00:14:53.089 { 00:14:53.089 "subsystem": "nvmf", 00:14:53.089 "config": [ 00:14:53.089 { 00:14:53.089 "method": "nvmf_set_config", 00:14:53.089 "params": { 00:14:53.089 "discovery_filter": "match_any", 00:14:53.089 "admin_cmd_passthru": { 00:14:53.089 "identify_ctrlr": false 00:14:53.089 } 00:14:53.089 } 00:14:53.089 }, 00:14:53.089 { 00:14:53.089 "method": "nvmf_set_max_subsystems", 00:14:53.089 "params": { 00:14:53.089 "max_subsystems": 1024 00:14:53.089 } 00:14:53.089 }, 00:14:53.089 { 00:14:53.089 "method": "nvmf_set_crdt", 00:14:53.089 "params": { 00:14:53.089 "crdt1": 0, 00:14:53.089 "crdt2": 0, 00:14:53.089 "crdt3": 0 00:14:53.089 } 00:14:53.089 } 00:14:53.089 ] 00:14:53.089 }, 00:14:53.089 { 00:14:53.089 "subsystem": "iscsi", 00:14:53.089 "config": [ 00:14:53.089 { 00:14:53.089 "method": "iscsi_set_options", 00:14:53.089 "params": { 00:14:53.089 "node_base": "iqn.2016-06.io.spdk", 00:14:53.089 "max_sessions": 128, 00:14:53.089 "max_connections_per_session": 2, 00:14:53.089 "max_queue_depth": 64, 00:14:53.089 "default_time2wait": 2, 
00:14:53.089 "default_time2retain": 20, 00:14:53.089 "first_burst_length": 8192, 00:14:53.089 "immediate_data": true, 00:14:53.089 "allow_duplicated_isid": false, 00:14:53.089 "error_recovery_level": 0, 00:14:53.089 "nop_timeout": 60, 00:14:53.089 "nop_in_interval": 30, 00:14:53.089 "disable_chap": false, 00:14:53.089 "require_chap": false, 00:14:53.089 "mutual_chap": false, 00:14:53.089 "chap_group": 0, 00:14:53.089 "max_large_datain_per_connection": 64, 00:14:53.089 "max_r2t_per_connection": 4, 00:14:53.089 "pdu_pool_size": 36864, 00:14:53.089 "immediate_data_pool_size": 16384, 00:14:53.089 "data_out_pool_size": 2048 00:14:53.089 } 00:14:53.089 } 00:14:53.089 ] 00:14:53.089 } 00:14:53.089 ] 00:14:53.089 }' 00:14:53.089 13:13:49 ublk.test_save_ublk_config -- common/autotest_common.sh@827 -- # '[' -z 86875 ']' 00:14:53.089 13:13:49 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.089 13:13:49 ublk.test_save_ublk_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:53.089 13:13:49 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.089 13:13:49 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:53.089 13:13:49 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:53.089 [2024-07-15 13:13:49.762059] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:53.089 [2024-07-15 13:13:49.762258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86875 ] 00:14:53.358 [2024-07-15 13:13:49.913480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.358 [2024-07-15 13:13:50.020371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.940 [2024-07-15 13:13:50.400168] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:53.941 [2024-07-15 13:13:50.400578] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:53.941 [2024-07-15 13:13:50.409384] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:53.941 [2024-07-15 13:13:50.409485] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:53.941 [2024-07-15 13:13:50.409502] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:53.941 [2024-07-15 13:13:50.409520] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:53.941 [2024-07-15 13:13:50.419256] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:53.941 [2024-07-15 13:13:50.419296] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:53.941 [2024-07-15 13:13:50.426186] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:53.941 [2024-07-15 13:13:50.426318] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:53.941 [2024-07-15 13:13:50.443181] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:54.199 13:13:50 ublk.test_save_ublk_config -- 
common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:54.199 13:13:50 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # return 0 00:14:54.199 13:13:50 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:54.199 13:13:50 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.199 13:13:50 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:54.199 13:13:50 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:54.199 13:13:50 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.199 13:13:50 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:54.199 13:13:50 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:54.199 13:13:50 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 86875 00:14:54.199 13:13:50 ublk.test_save_ublk_config -- common/autotest_common.sh@946 -- # '[' -z 86875 ']' 00:14:54.199 13:13:50 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # kill -0 86875 00:14:54.199 13:13:50 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # uname 00:14:54.199 13:13:50 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:54.199 13:13:50 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86875 00:14:54.199 13:13:50 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:54.199 killing process with pid 86875 00:14:54.199 13:13:50 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:54.199 13:13:50 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86875' 00:14:54.199 13:13:50 ublk.test_save_ublk_config -- common/autotest_common.sh@965 -- # kill 86875 00:14:54.199 13:13:50 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # wait 86875 00:14:54.457 [2024-07-15 13:13:51.124214] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:54.457 [2024-07-15 13:13:51.156319] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:54.457 [2024-07-15 13:13:51.156506] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:54.457 [2024-07-15 13:13:51.164216] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:54.457 [2024-07-15 13:13:51.164278] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:54.457 [2024-07-15 13:13:51.164291] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:54.457 [2024-07-15 13:13:51.164325] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:14:54.457 [2024-07-15 13:13:51.164509] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:14:55.022 13:13:51 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:55.022 00:14:55.022 real 0m3.870s 00:14:55.022 user 0m2.999s 00:14:55.022 sys 0m1.734s 00:14:55.022 13:13:51 ublk.test_save_ublk_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:55.022 13:13:51 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:55.022 ************************************ 00:14:55.022 END TEST test_save_ublk_config 00:14:55.022 ************************************ 00:14:55.022 13:13:51 ublk -- ublk/ublk.sh@139 -- # spdk_pid=86927 00:14:55.022 13:13:51 ublk -- 
ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:55.022 13:13:51 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:55.022 13:13:51 ublk -- ublk/ublk.sh@141 -- # waitforlisten 86927 00:14:55.022 13:13:51 ublk -- common/autotest_common.sh@827 -- # '[' -z 86927 ']' 00:14:55.022 13:13:51 ublk -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.022 13:13:51 ublk -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:55.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.022 13:13:51 ublk -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.022 13:13:51 ublk -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:55.022 13:13:51 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:55.022 [2024-07-15 13:13:51.616202] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:55.022 [2024-07-15 13:13:51.616362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86927 ] 00:14:55.280 [2024-07-15 13:13:51.762871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:55.280 [2024-07-15 13:13:51.861138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.280 [2024-07-15 13:13:51.861199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.845 13:13:52 ublk -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:55.845 13:13:52 ublk -- common/autotest_common.sh@860 -- # return 0 00:14:55.845 13:13:52 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:55.845 13:13:52 ublk -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:55.845 13:13:52 ublk -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:55.845 13:13:52 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:56.103 ************************************ 00:14:56.103 START TEST test_create_ublk 00:14:56.103 ************************************ 00:14:56.103 13:13:52 ublk.test_create_ublk -- common/autotest_common.sh@1121 -- # test_create_ublk 00:14:56.103 13:13:52 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:56.103 13:13:52 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.103 13:13:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:56.103 [2024-07-15 13:13:52.601179] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:56.103 [2024-07-15 13:13:52.603044] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:56.103 13:13:52 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.103 13:13:52 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:14:56.103 13:13:52 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:56.103 13:13:52 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.103 13:13:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:56.103 13:13:52 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.103 13:13:52 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:56.103 13:13:52 
ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:56.103 13:13:52 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.103 13:13:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:56.103 [2024-07-15 13:13:52.689332] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:56.103 [2024-07-15 13:13:52.689914] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:56.103 [2024-07-15 13:13:52.689953] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:56.103 [2024-07-15 13:13:52.689976] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:56.103 [2024-07-15 13:13:52.698559] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:56.103 [2024-07-15 13:13:52.698590] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:56.103 [2024-07-15 13:13:52.705193] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:56.103 [2024-07-15 13:13:52.712270] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:56.103 [2024-07-15 13:13:52.723297] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:56.103 13:13:52 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.103 13:13:52 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:56.103 13:13:52 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:56.103 13:13:52 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:56.103 13:13:52 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.103 13:13:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:56.103 13:13:52 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.103 13:13:52 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:56.103 { 00:14:56.103 "ublk_device": "/dev/ublkb0", 00:14:56.103 "id": 0, 00:14:56.103 "queue_depth": 512, 00:14:56.103 "num_queues": 4, 00:14:56.103 "bdev_name": "Malloc0" 00:14:56.103 } 00:14:56.103 ]' 00:14:56.103 13:13:52 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:56.103 13:13:52 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:56.103 13:13:52 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:56.362 13:13:52 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:56.362 13:13:52 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:56.362 13:13:52 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:56.362 13:13:52 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:56.362 13:13:52 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:56.362 13:13:52 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:56.362 13:13:53 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:56.362 13:13:53 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:56.362 13:13:53 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:56.362 13:13:53 ublk.test_create_ublk -- lvol/common.sh@41 -- # local 
offset=0 00:14:56.362 13:13:53 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:56.362 13:13:53 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:14:56.362 13:13:53 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:56.362 13:13:53 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:56.362 13:13:53 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:56.362 13:13:53 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:56.362 13:13:53 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:56.362 13:13:53 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:56.362 13:13:53 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:56.621 fio: verification read phase will never start because write phase uses all of runtime 00:14:56.621 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:56.621 fio-3.35 00:14:56.621 Starting 1 process 00:15:06.670 00:15:06.670 fio_test: (groupid=0, jobs=1): err= 0: pid=86971: Mon Jul 15 13:14:03 2024 00:15:06.670 write: IOPS=11.2k, BW=43.8MiB/s (45.9MB/s)(438MiB/10001msec); 0 zone resets 00:15:06.670 clat (usec): min=57, max=4007, avg=87.69, stdev=127.32 00:15:06.670 lat (usec): min=58, max=4008, avg=88.47, stdev=127.33 00:15:06.670 clat percentiles (usec): 00:15:06.670 | 1.00th=[ 62], 5.00th=[ 74], 10.00th=[ 75], 20.00th=[ 77], 00:15:06.670 | 30.00th=[ 78], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 82], 00:15:06.670 | 70.00th=[ 83], 80.00th=[ 86], 90.00th=[ 90], 95.00th=[ 94], 00:15:06.670 | 99.00th=[ 113], 99.50th=[ 128], 99.90th=[ 2737], 99.95th=[ 3163], 00:15:06.670 | 99.99th=[ 3621] 00:15:06.670 bw ( KiB/s): min=42944, max=50016, per=100.00%, avg=44869.89, stdev=1559.22, samples=19 00:15:06.670 iops : min=10736, max=12504, avg=11217.47, stdev=389.80, samples=19 00:15:06.670 lat (usec) : 100=97.74%, 250=1.94%, 500=0.01%, 750=0.02%, 1000=0.02% 00:15:06.670 lat (msec) : 2=0.09%, 4=0.18%, 10=0.01% 00:15:06.670 cpu : usr=2.97%, sys=8.52%, ctx=112130, majf=0, minf=795 00:15:06.670 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:06.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.670 issued rwts: total=0,112130,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.670 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:06.670 00:15:06.670 Run status group 0 (all jobs): 00:15:06.670 WRITE: bw=43.8MiB/s (45.9MB/s), 43.8MiB/s-43.8MiB/s (45.9MB/s-45.9MB/s), io=438MiB (459MB), run=10001-10001msec 00:15:06.670 00:15:06.670 Disk stats (read/write): 00:15:06.670 ublkb0: ios=0/110983, merge=0/0, ticks=0/8813, in_queue=8813, util=99.10% 00:15:06.670 13:14:03 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:15:06.670 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:06.670 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:06.670 [2024-07-15 13:14:03.251650] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:06.671 [2024-07-15 13:14:03.297234] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:06.671 [2024-07-15 13:14:03.298467] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:06.671 [2024-07-15 13:14:03.305196] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:06.671 [2024-07-15 13:14:03.305531] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:06.671 [2024-07-15 13:14:03.305559] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:06.671 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.671 13:14:03 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:15:06.671 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@648 -- # local es=0 00:15:06.671 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:15:06.671 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:06.671 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:06.671 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:06.671 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:06.671 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # rpc_cmd ublk_stop_disk 0 00:15:06.671 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.671 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:06.671 [2024-07-15 13:14:03.329320] ublk.c:1071:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:15:06.671 request: 00:15:06.671 { 00:15:06.671 "ublk_id": 0, 00:15:06.671 "method": "ublk_stop_disk", 00:15:06.671 "req_id": 1 00:15:06.671 } 00:15:06.671 Got JSON-RPC error response 00:15:06.671 response: 00:15:06.671 { 00:15:06.671 "code": -19, 00:15:06.671 "message": "No such device" 00:15:06.671 } 00:15:06.671 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:06.671 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # es=1 00:15:06.671 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:06.671 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:06.671 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:06.671 13:14:03 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:15:06.671 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.671 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:06.671 [2024-07-15 13:14:03.345298] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:06.671 [2024-07-15 13:14:03.347405] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:06.671 [2024-07-15 13:14:03.347468] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:06.671 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.671 13:14:03 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # 
rpc_cmd bdev_malloc_delete Malloc0 00:15:06.671 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.671 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:06.933 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.933 13:14:03 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:15:06.933 13:14:03 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:06.933 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.933 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:06.933 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.933 13:14:03 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:06.933 13:14:03 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:15:06.933 13:14:03 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:06.933 13:14:03 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:06.933 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.933 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:06.933 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.933 13:14:03 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:06.933 13:14:03 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:15:06.933 ************************************ 00:15:06.933 END TEST test_create_ublk 00:15:06.933 ************************************ 00:15:06.933 13:14:03 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:06.933 00:15:06.933 real 0m10.945s 00:15:06.933 user 0m0.741s 00:15:06.933 sys 0m0.958s 00:15:06.933 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:06.933 13:14:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:06.933 13:14:03 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:15:06.933 13:14:03 ublk -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:06.933 13:14:03 ublk -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:06.933 13:14:03 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:06.933 ************************************ 00:15:06.933 START TEST test_create_multi_ublk 00:15:06.933 ************************************ 00:15:06.933 13:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@1121 -- # test_create_multi_ublk 00:15:06.933 13:14:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:15:06.933 13:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.933 13:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:06.933 [2024-07-15 13:14:03.593168] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:06.933 [2024-07-15 13:14:03.595010] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:06.933 13:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.933 13:14:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:15:06.933 13:14:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:15:06.933 13:14:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- 
# for i in $(seq 0 $MAX_DEV_ID) 00:15:06.933 13:14:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:15:06.933 13:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.933 13:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:07.191 13:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.191 13:14:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:15:07.191 13:14:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:07.191 13:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.191 13:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:07.191 [2024-07-15 13:14:03.689360] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:07.191 [2024-07-15 13:14:03.689902] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:07.191 [2024-07-15 13:14:03.689939] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:07.191 [2024-07-15 13:14:03.689954] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:07.191 [2024-07-15 13:14:03.698485] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:07.191 [2024-07-15 13:14:03.698521] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:07.191 [2024-07-15 13:14:03.705179] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:07.191 [2024-07-15 13:14:03.706047] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:07.191 [2024-07-15 13:14:03.727200] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:07.191 13:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.191 13:14:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:15:07.191 13:14:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:07.191 13:14:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:15:07.191 13:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.191 13:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:07.191 13:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.191 13:14:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:15:07.191 13:14:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:15:07.191 13:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.191 13:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:07.191 [2024-07-15 13:14:03.822370] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:15:07.191 [2024-07-15 13:14:03.822894] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:15:07.191 [2024-07-15 13:14:03.822922] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:07.191 [2024-07-15 13:14:03.822934] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: 
ctrl cmd UBLK_CMD_ADD_DEV 00:15:07.191 [2024-07-15 13:14:03.830208] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:07.191 [2024-07-15 13:14:03.830238] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:07.191 [2024-07-15 13:14:03.838200] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:07.191 [2024-07-15 13:14:03.839049] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:07.191 [2024-07-15 13:14:03.847244] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:07.191 13:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.191 13:14:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:15:07.191 13:14:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:07.191 13:14:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:15:07.191 13:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.191 13:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:07.450 13:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.450 13:14:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:15:07.450 13:14:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:15:07.450 13:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.450 13:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:07.450 [2024-07-15 13:14:03.942332] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:15:07.450 [2024-07-15 13:14:03.942920] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:15:07.450 [2024-07-15 13:14:03.942944] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:15:07.450 [2024-07-15 13:14:03.942959] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:15:07.450 [2024-07-15 13:14:03.950200] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:07.450 [2024-07-15 13:14:03.950235] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:07.450 [2024-07-15 13:14:03.958184] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:07.450 [2024-07-15 13:14:03.958989] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:15:07.450 [2024-07-15 13:14:03.967236] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:15:07.450 13:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.450 13:14:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:15:07.450 13:14:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:07.450 13:14:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:15:07.450 13:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.450 13:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:07.450 13:14:04 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.450 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:15:07.450 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:15:07.450 13:14:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.450 13:14:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:07.450 [2024-07-15 13:14:04.062325] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:15:07.450 [2024-07-15 13:14:04.062862] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:15:07.450 [2024-07-15 13:14:04.062891] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:15:07.450 [2024-07-15 13:14:04.062903] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:15:07.450 [2024-07-15 13:14:04.071521] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:07.450 [2024-07-15 13:14:04.071552] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:07.450 [2024-07-15 13:14:04.078198] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:07.450 [2024-07-15 13:14:04.079076] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:15:07.450 [2024-07-15 13:14:04.088287] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:15:07.450 13:14:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.450 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:15:07.450 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:15:07.450 13:14:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.450 13:14:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:07.450 13:14:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.450 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:15:07.450 { 00:15:07.450 "ublk_device": "/dev/ublkb0", 00:15:07.450 "id": 0, 00:15:07.450 "queue_depth": 512, 00:15:07.450 "num_queues": 4, 00:15:07.450 "bdev_name": "Malloc0" 00:15:07.450 }, 00:15:07.450 { 00:15:07.450 "ublk_device": "/dev/ublkb1", 00:15:07.450 "id": 1, 00:15:07.450 "queue_depth": 512, 00:15:07.450 "num_queues": 4, 00:15:07.450 "bdev_name": "Malloc1" 00:15:07.450 }, 00:15:07.450 { 00:15:07.450 "ublk_device": "/dev/ublkb2", 00:15:07.450 "id": 2, 00:15:07.450 "queue_depth": 512, 00:15:07.450 "num_queues": 4, 00:15:07.450 "bdev_name": "Malloc2" 00:15:07.450 }, 00:15:07.450 { 00:15:07.450 "ublk_device": "/dev/ublkb3", 00:15:07.450 "id": 3, 00:15:07.450 "queue_depth": 512, 00:15:07.450 "num_queues": 4, 00:15:07.450 "bdev_name": "Malloc3" 00:15:07.450 } 00:15:07.450 ]' 00:15:07.450 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:15:07.450 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:07.450 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:15:07.450 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:07.450 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:15:07.708 13:14:04 
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:15:07.708 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:15:07.708 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:07.708 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:15:07.708 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:07.708 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:15:07.708 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:07.708 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:07.708 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:15:07.708 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:15:07.708 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:15:07.966 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:15:07.966 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:15:07.966 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:07.966 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:15:07.966 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:07.966 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:15:07.966 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:15:07.966 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:07.966 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:15:07.966 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:15:07.966 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:15:08.224 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:15:08.224 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:15:08.224 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:08.224 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:15:08.224 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:08.224 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:15:08.224 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:15:08.224 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:08.224 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:15:08.224 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:15:08.224 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:15:08.224 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:15:08.224 13:14:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:15:08.481 13:14:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:08.481 13:14:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 
-- # jq -r '.[3].num_queues' 00:15:08.481 13:14:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:08.481 13:14:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:15:08.481 13:14:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:15:08.481 13:14:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:15:08.481 13:14:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:15:08.481 13:14:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:08.481 13:14:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:15:08.481 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.481 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:08.481 [2024-07-15 13:14:05.116384] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:08.482 [2024-07-15 13:14:05.148781] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:08.482 [2024-07-15 13:14:05.153499] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:08.482 [2024-07-15 13:14:05.160195] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:08.482 [2024-07-15 13:14:05.160593] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:08.482 [2024-07-15 13:14:05.160619] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:08.482 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.482 13:14:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:08.482 13:14:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:15:08.482 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.482 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:08.482 [2024-07-15 13:14:05.166318] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:08.482 [2024-07-15 13:14:05.203797] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:08.482 [2024-07-15 13:14:05.208544] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:08.482 [2024-07-15 13:14:05.216304] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:08.482 [2024-07-15 13:14:05.216681] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:08.482 [2024-07-15 13:14:05.216708] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:08.739 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.739 13:14:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:08.739 13:14:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:15:08.739 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.739 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:08.739 [2024-07-15 13:14:05.227341] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:15:08.739 [2024-07-15 13:14:05.263829] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 
00:15:08.740 [2024-07-15 13:14:05.265262] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:15:08.740 [2024-07-15 13:14:05.275219] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:08.740 [2024-07-15 13:14:05.275607] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:15:08.740 [2024-07-15 13:14:05.275633] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:15:08.740 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.740 13:14:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:08.740 13:14:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:15:08.740 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.740 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:08.740 [2024-07-15 13:14:05.281336] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:15:08.740 [2024-07-15 13:14:05.322236] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:08.740 [2024-07-15 13:14:05.327625] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:15:08.740 [2024-07-15 13:14:05.335356] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:08.740 [2024-07-15 13:14:05.335725] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:15:08.740 [2024-07-15 13:14:05.335753] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:15:08.740 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.740 13:14:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:15:08.997 [2024-07-15 13:14:05.619286] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:08.997 [2024-07-15 13:14:05.625027] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:08.997 [2024-07-15 13:14:05.625087] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:08.997 13:14:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:15:08.997 13:14:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:08.997 13:14:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:08.997 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.997 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:08.997 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.997 13:14:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:08.997 13:14:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:15:08.997 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.997 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:09.255 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.255 13:14:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:09.255 13:14:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:09.255 13:14:05 
ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.255 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:09.255 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.255 13:14:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:09.255 13:14:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:15:09.255 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.255 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:09.255 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.255 13:14:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:15:09.255 13:14:05 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:09.255 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.255 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:09.255 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.255 13:14:05 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:09.255 13:14:05 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:15:09.255 13:14:05 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:09.255 13:14:05 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:09.255 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.255 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:09.255 13:14:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.255 13:14:05 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:09.255 13:14:05 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:15:09.513 ************************************ 00:15:09.513 END TEST test_create_multi_ublk 00:15:09.513 ************************************ 00:15:09.513 13:14:06 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:09.513 00:15:09.513 real 0m2.436s 00:15:09.513 user 0m1.268s 00:15:09.513 sys 0m0.187s 00:15:09.513 13:14:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:09.513 13:14:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:09.513 13:14:06 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:15:09.513 13:14:06 ublk -- ublk/ublk.sh@147 -- # cleanup 00:15:09.513 13:14:06 ublk -- ublk/ublk.sh@130 -- # killprocess 86927 00:15:09.513 13:14:06 ublk -- common/autotest_common.sh@946 -- # '[' -z 86927 ']' 00:15:09.513 13:14:06 ublk -- common/autotest_common.sh@950 -- # kill -0 86927 00:15:09.513 13:14:06 ublk -- common/autotest_common.sh@951 -- # uname 00:15:09.513 13:14:06 ublk -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:09.513 13:14:06 ublk -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86927 00:15:09.513 killing process with pid 86927 00:15:09.513 13:14:06 ublk -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:09.513 13:14:06 ublk -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:09.513 13:14:06 ublk -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 86927' 00:15:09.513 13:14:06 ublk -- common/autotest_common.sh@965 -- # kill 86927 00:15:09.513 13:14:06 ublk -- common/autotest_common.sh@970 -- # wait 86927 00:15:09.770 [2024-07-15 13:14:06.259480] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:09.770 [2024-07-15 13:14:06.259583] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:10.029 00:15:10.029 real 0m19.056s 00:15:10.029 user 0m30.641s 00:15:10.029 sys 0m7.352s 00:15:10.029 13:14:06 ublk -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:10.029 13:14:06 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:10.029 ************************************ 00:15:10.029 END TEST ublk 00:15:10.029 ************************************ 00:15:10.029 13:14:06 -- spdk/autotest.sh@252 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:10.029 13:14:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:10.029 13:14:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:10.029 13:14:06 -- common/autotest_common.sh@10 -- # set +x 00:15:10.029 ************************************ 00:15:10.029 START TEST ublk_recovery 00:15:10.029 ************************************ 00:15:10.029 13:14:06 ublk_recovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:10.029 * Looking for test storage... 00:15:10.029 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:10.029 13:14:06 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:10.029 13:14:06 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:10.029 13:14:06 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:10.029 13:14:06 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:10.029 13:14:06 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:10.029 13:14:06 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:10.029 13:14:06 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:10.029 13:14:06 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:10.029 13:14:06 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:10.029 13:14:06 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:15:10.029 13:14:06 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=87267 00:15:10.029 13:14:06 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:10.029 13:14:06 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:10.029 13:14:06 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 87267 00:15:10.029 13:14:06 ublk_recovery -- common/autotest_common.sh@827 -- # '[' -z 87267 ']' 00:15:10.029 13:14:06 ublk_recovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.029 13:14:06 ublk_recovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:10.029 13:14:06 ublk_recovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:10.029 13:14:06 ublk_recovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:10.029 13:14:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.287 [2024-07-15 13:14:06.795770] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:15:10.287 [2024-07-15 13:14:06.796399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87267 ] 00:15:10.287 [2024-07-15 13:14:06.948416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:10.545 [2024-07-15 13:14:07.046042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.545 [2024-07-15 13:14:07.046076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.110 13:14:07 ublk_recovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:11.110 13:14:07 ublk_recovery -- common/autotest_common.sh@860 -- # return 0 00:15:11.110 13:14:07 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:15:11.110 13:14:07 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.110 13:14:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:11.110 [2024-07-15 13:14:07.745175] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:11.110 [2024-07-15 13:14:07.746994] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:11.110 13:14:07 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.110 13:14:07 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:11.110 13:14:07 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.110 13:14:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:11.110 malloc0 00:15:11.110 13:14:07 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.111 13:14:07 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:15:11.111 13:14:07 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.111 13:14:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:11.111 [2024-07-15 13:14:07.793644] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:15:11.111 [2024-07-15 13:14:07.793782] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:15:11.111 [2024-07-15 13:14:07.793806] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:11.111 [2024-07-15 13:14:07.793831] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:11.111 [2024-07-15 13:14:07.801367] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:11.111 [2024-07-15 13:14:07.801402] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:11.111 [2024-07-15 13:14:07.809190] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:11.111 [2024-07-15 13:14:07.809408] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:11.111 [2024-07-15 13:14:07.832204] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:11.111 1 00:15:11.111 13:14:07 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.111 
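
The device bring-up traced above reduces to three RPCs; a minimal sketch using exactly the arguments shown (a 64 MiB / 4096-byte-block malloc bdev exported as ublk ID 1 with 2 queues of depth 128), assuming the target started earlier is still listening:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_create_target
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128
    # the kernel driver then exposes the block device as /dev/ublkb1
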
13:14:07 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:15:12.485 13:14:08 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=87305 00:15:12.485 13:14:08 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:15:12.485 13:14:08 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:15:12.485 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:12.485 fio-3.35 00:15:12.485 Starting 1 process 00:15:17.747 13:14:13 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 87267 00:15:17.747 13:14:13 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:15:23.010 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 87267 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:15:23.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.010 13:14:18 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=87410 00:15:23.010 13:14:18 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:23.010 13:14:18 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:23.010 13:14:18 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 87410 00:15:23.010 13:14:18 ublk_recovery -- common/autotest_common.sh@827 -- # '[' -z 87410 ']' 00:15:23.010 13:14:18 ublk_recovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.010 13:14:18 ublk_recovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:23.010 13:14:18 ublk_recovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.010 13:14:18 ublk_recovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:23.010 13:14:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:23.010 [2024-07-15 13:14:18.988086] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
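
The crash-and-recover flow exercised here, sketched from the commands in the trace (the literal PID 87267 is replaced by $spdk_pid, since it differs per run):

    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 &
    sleep 5
    kill -9 "$spdk_pid"          # simulate a target crash while fio is mid-I/O
    sleep 5
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &   # restart the target
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_create_target
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_recover_disk malloc0 1   # re-attach the still-open /dev/ublkb1
    wait    # fio should complete its 60s run with err=0, as the results below confirm
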
00:15:23.010 [2024-07-15 13:14:18.988543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87410 ] 00:15:23.010 [2024-07-15 13:14:19.132263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:23.010 [2024-07-15 13:14:19.246725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.010 [2024-07-15 13:14:19.246768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.271 13:14:19 ublk_recovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:23.271 13:14:19 ublk_recovery -- common/autotest_common.sh@860 -- # return 0 00:15:23.271 13:14:19 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:15:23.271 13:14:19 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.271 13:14:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:23.271 [2024-07-15 13:14:19.939186] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:23.271 [2024-07-15 13:14:19.941045] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:23.271 13:14:19 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.271 13:14:19 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:23.271 13:14:19 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.271 13:14:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:23.271 malloc0 00:15:23.271 13:14:19 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.271 13:14:19 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:15:23.271 13:14:19 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.271 13:14:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:23.271 [2024-07-15 13:14:19.995390] ublk.c:2095:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:15:23.271 [2024-07-15 13:14:19.995462] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:23.271 [2024-07-15 13:14:19.995479] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:23.271 [2024-07-15 13:14:20.003263] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:23.271 [2024-07-15 13:14:20.003332] ublk.c:2024:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:15:23.271 1 00:15:23.271 [2024-07-15 13:14:20.003452] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:15:23.271 13:14:20 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.271 13:14:20 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 87305 00:15:23.529 [2024-07-15 13:14:20.011218] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:15:23.529 [2024-07-15 13:14:20.019012] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:15:23.529 [2024-07-15 13:14:20.026538] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:15:23.529 [2024-07-15 13:14:20.026584] ublk.c: 378:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:16:19.735 00:16:19.735 fio_test: (groupid=0, 
jobs=1): err= 0: pid=87308: Mon Jul 15 13:15:09 2024 00:16:19.735 read: IOPS=18.1k, BW=70.8MiB/s (74.2MB/s)(4248MiB/60003msec) 00:16:19.735 slat (usec): min=2, max=209, avg= 6.68, stdev= 2.65 00:16:19.735 clat (usec): min=1203, max=6189.2k, avg=3464.30, stdev=47067.73 00:16:19.735 lat (usec): min=1223, max=6189.3k, avg=3470.98, stdev=47067.74 00:16:19.735 clat percentiles (usec): 00:16:19.735 | 1.00th=[ 2573], 5.00th=[ 2802], 10.00th=[ 2835], 20.00th=[ 2900], 00:16:19.735 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2966], 60.00th=[ 2999], 00:16:19.735 | 70.00th=[ 3032], 80.00th=[ 3097], 90.00th=[ 3294], 95.00th=[ 4080], 00:16:19.735 | 99.00th=[ 5735], 99.50th=[ 6456], 99.90th=[ 8094], 99.95th=[ 9110], 00:16:19.735 | 99.99th=[13829] 00:16:19.735 bw ( KiB/s): min= 2328, max=84080, per=100.00%, avg=79888.39, stdev=9897.41, samples=108 00:16:19.735 iops : min= 582, max=21020, avg=19972.08, stdev=2474.35, samples=108 00:16:19.735 write: IOPS=18.1k, BW=70.8MiB/s (74.2MB/s)(4246MiB/60003msec); 0 zone resets 00:16:19.735 slat (usec): min=2, max=264, avg= 6.76, stdev= 2.74 00:16:19.735 clat (usec): min=1123, max=6189.4k, avg=3584.81, stdev=47823.73 00:16:19.735 lat (usec): min=1131, max=6189.4k, avg=3591.57, stdev=47823.73 00:16:19.735 clat percentiles (usec): 00:16:19.735 | 1.00th=[ 2606], 5.00th=[ 2900], 10.00th=[ 2966], 20.00th=[ 3032], 00:16:19.735 | 30.00th=[ 3064], 40.00th=[ 3097], 50.00th=[ 3097], 60.00th=[ 3130], 00:16:19.735 | 70.00th=[ 3163], 80.00th=[ 3228], 90.00th=[ 3359], 95.00th=[ 4047], 00:16:19.735 | 99.00th=[ 5735], 99.50th=[ 6587], 99.90th=[ 8094], 99.95th=[ 8979], 00:16:19.735 | 99.99th=[13960] 00:16:19.735 bw ( KiB/s): min= 2560, max=84536, per=100.00%, avg=79838.55, stdev=9888.20, samples=108 00:16:19.735 iops : min= 640, max=21134, avg=19959.61, stdev=2472.07, samples=108 00:16:19.735 lat (msec) : 2=0.05%, 4=94.68%, 10=5.23%, 20=0.03%, >=2000=0.01% 00:16:19.735 cpu : usr=10.68%, sys=22.77%, ctx=68147, majf=0, minf=13 00:16:19.735 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:16:19.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:19.735 issued rwts: total=1087446,1086873,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.735 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:19.735 00:16:19.735 Run status group 0 (all jobs): 00:16:19.735 READ: bw=70.8MiB/s (74.2MB/s), 70.8MiB/s-70.8MiB/s (74.2MB/s-74.2MB/s), io=4248MiB (4454MB), run=60003-60003msec 00:16:19.735 WRITE: bw=70.8MiB/s (74.2MB/s), 70.8MiB/s-70.8MiB/s (74.2MB/s-74.2MB/s), io=4246MiB (4452MB), run=60003-60003msec 00:16:19.735 00:16:19.735 Disk stats (read/write): 00:16:19.735 ublkb1: ios=1085121/1084530, merge=0/0, ticks=3658255/3658929, in_queue=7317184, util=99.94% 00:16:19.735 13:15:09 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:16:19.736 13:15:09 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.736 13:15:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.736 [2024-07-15 13:15:09.107837] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:19.736 [2024-07-15 13:15:09.143246] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:19.736 [2024-07-15 13:15:09.143593] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:19.736 [2024-07-15 13:15:09.151216] ublk.c: 328:ublk_ctrl_process_cqe: 
*DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:19.736 [2024-07-15 13:15:09.151380] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:19.736 [2024-07-15 13:15:09.151395] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:19.736 13:15:09 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.736 13:15:09 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:16:19.736 13:15:09 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.736 13:15:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.736 [2024-07-15 13:15:09.167355] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:16:19.736 [2024-07-15 13:15:09.169523] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:16:19.736 [2024-07-15 13:15:09.169590] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:19.736 13:15:09 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.736 13:15:09 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:16:19.736 13:15:09 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:16:19.736 13:15:09 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 87410 00:16:19.736 13:15:09 ublk_recovery -- common/autotest_common.sh@946 -- # '[' -z 87410 ']' 00:16:19.736 13:15:09 ublk_recovery -- common/autotest_common.sh@950 -- # kill -0 87410 00:16:19.736 13:15:09 ublk_recovery -- common/autotest_common.sh@951 -- # uname 00:16:19.736 13:15:09 ublk_recovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:19.736 13:15:09 ublk_recovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87410 00:16:19.736 killing process with pid 87410 00:16:19.736 13:15:09 ublk_recovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:19.736 13:15:09 ublk_recovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:19.736 13:15:09 ublk_recovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87410' 00:16:19.736 13:15:09 ublk_recovery -- common/autotest_common.sh@965 -- # kill 87410 00:16:19.736 13:15:09 ublk_recovery -- common/autotest_common.sh@970 -- # wait 87410 00:16:19.736 [2024-07-15 13:15:09.375960] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:16:19.736 [2024-07-15 13:15:09.376052] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:16:19.736 ************************************ 00:16:19.736 END TEST ublk_recovery 00:16:19.736 ************************************ 00:16:19.736 00:16:19.736 real 1m3.104s 00:16:19.736 user 1m44.487s 00:16:19.736 sys 0m31.429s 00:16:19.736 13:15:09 ublk_recovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:19.736 13:15:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.736 13:15:09 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:16:19.736 13:15:09 -- spdk/autotest.sh@260 -- # timing_exit lib 00:16:19.736 13:15:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:19.736 13:15:09 -- common/autotest_common.sh@10 -- # set +x 00:16:19.736 13:15:09 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:16:19.736 13:15:09 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:16:19.736 13:15:09 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:16:19.736 13:15:09 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:16:19.736 13:15:09 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:16:19.736 13:15:09 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:16:19.736 13:15:09 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 
']' 00:16:19.736 13:15:09 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:16:19.736 13:15:09 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:16:19.736 13:15:09 -- spdk/autotest.sh@339 -- # '[' 1 -eq 1 ']' 00:16:19.736 13:15:09 -- spdk/autotest.sh@340 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:19.736 13:15:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:19.736 13:15:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:19.736 13:15:09 -- common/autotest_common.sh@10 -- # set +x 00:16:19.736 ************************************ 00:16:19.736 START TEST ftl 00:16:19.736 ************************************ 00:16:19.736 13:15:09 ftl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:19.736 * Looking for test storage... 00:16:19.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:19.736 13:15:09 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:19.736 13:15:09 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:19.736 13:15:09 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:19.736 13:15:09 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:19.736 13:15:09 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:19.736 13:15:09 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:19.736 13:15:09 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:19.736 13:15:09 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:19.736 13:15:09 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:19.736 13:15:09 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:19.736 13:15:09 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:19.736 13:15:09 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:19.736 13:15:09 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:19.736 13:15:09 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:19.736 13:15:09 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:19.736 13:15:09 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:19.736 13:15:09 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:19.736 13:15:09 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:19.736 13:15:09 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:19.736 13:15:09 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:19.736 13:15:09 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:19.736 13:15:09 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:19.736 13:15:09 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:19.736 13:15:09 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:19.736 13:15:09 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:19.736 13:15:09 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:19.736 13:15:09 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:19.736 13:15:09 ftl -- ftl/common.sh@25 -- # export 
spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:19.736 13:15:09 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:19.736 13:15:09 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:19.736 13:15:09 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:16:19.736 13:15:09 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:16:19.736 13:15:09 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:16:19.736 13:15:09 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:16:19.736 13:15:09 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:19.736 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:19.736 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:19.736 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:19.736 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:19.736 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:19.736 13:15:10 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=88180 00:16:19.736 13:15:10 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:19.736 13:15:10 ftl -- ftl/ftl.sh@38 -- # waitforlisten 88180 00:16:19.736 13:15:10 ftl -- common/autotest_common.sh@827 -- # '[' -z 88180 ']' 00:16:19.736 13:15:10 ftl -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.736 13:15:10 ftl -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:19.736 13:15:10 ftl -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.736 13:15:10 ftl -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:19.736 13:15:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:19.736 [2024-07-15 13:15:10.472492] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:16:19.736 [2024-07-15 13:15:10.472931] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88180 ] 00:16:19.736 [2024-07-15 13:15:10.618420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.736 [2024-07-15 13:15:10.719415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.736 13:15:11 ftl -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:19.736 13:15:11 ftl -- common/autotest_common.sh@860 -- # return 0 00:16:19.736 13:15:11 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:16:19.736 13:15:11 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:19.736 13:15:12 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:19.736 13:15:12 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:16:19.736 13:15:12 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:16:19.736 13:15:12 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:19.736 13:15:12 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:19.736 13:15:12 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:16:19.736 13:15:12 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:16:19.736 13:15:12 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:16:19.736 13:15:12 ftl -- ftl/ftl.sh@50 -- # break 00:16:19.736 13:15:12 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:16:19.736 13:15:12 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:16:19.736 13:15:12 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:19.736 13:15:12 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:19.736 13:15:13 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:16:19.736 13:15:13 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:16:19.736 13:15:13 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:16:19.736 13:15:13 ftl -- ftl/ftl.sh@63 -- # break 00:16:19.736 13:15:13 ftl -- ftl/ftl.sh@66 -- # killprocess 88180 00:16:19.737 13:15:13 ftl -- common/autotest_common.sh@946 -- # '[' -z 88180 ']' 00:16:19.737 13:15:13 ftl -- common/autotest_common.sh@950 -- # kill -0 88180 00:16:19.737 13:15:13 ftl -- common/autotest_common.sh@951 -- # uname 00:16:19.737 13:15:13 ftl -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:19.737 13:15:13 ftl -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88180 00:16:19.737 killing process with pid 88180 00:16:19.737 13:15:13 ftl -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:19.737 13:15:13 ftl -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:19.737 13:15:13 ftl -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88180' 00:16:19.737 13:15:13 ftl -- common/autotest_common.sh@965 -- # kill 88180 00:16:19.737 13:15:13 ftl -- common/autotest_common.sh@970 -- # wait 88180 00:16:19.737 13:15:13 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:16:19.737 13:15:13 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:19.737 13:15:13 ftl -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:16:19.737 13:15:13 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:19.737 13:15:13 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:19.737 ************************************ 00:16:19.737 START TEST ftl_fio_basic 00:16:19.737 ************************************ 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:19.737 * Looking for test storage... 00:16:19.737 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=88293 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 88293 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- common/autotest_common.sh@827 -- # '[' -z 88293 ']' 00:16:19.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:19.737 13:15:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:19.737 [2024-07-15 13:15:13.988083] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:16:19.737 [2024-07-15 13:15:13.988284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88293 ] 00:16:19.737 [2024-07-15 13:15:14.137288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:19.737 [2024-07-15 13:15:14.235629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.737 [2024-07-15 13:15:14.235686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.737 [2024-07-15 13:15:14.235735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:19.737 13:15:14 ftl.ftl_fio_basic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:19.737 13:15:14 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # return 0 00:16:19.737 13:15:14 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:19.737 13:15:14 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:16:19.737 13:15:14 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:19.737 13:15:14 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:16:19.737 13:15:14 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:16:19.737 13:15:14 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:19.737 13:15:15 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:19.737 13:15:15 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:16:19.737 13:15:15 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:19.737 13:15:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:16:19.737 13:15:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:19.737 13:15:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:16:19.737 13:15:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:16:19.737 13:15:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:19.737 13:15:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:19.737 { 00:16:19.737 "name": "nvme0n1", 00:16:19.737 "aliases": [ 00:16:19.737 "7435d5c2-6c30-472d-80f9-ac791e0e2edc" 00:16:19.737 ], 00:16:19.737 "product_name": "NVMe disk", 00:16:19.737 "block_size": 4096, 00:16:19.737 "num_blocks": 1310720, 00:16:19.737 "uuid": "7435d5c2-6c30-472d-80f9-ac791e0e2edc", 00:16:19.737 "assigned_rate_limits": { 00:16:19.737 "rw_ios_per_sec": 0, 00:16:19.737 "rw_mbytes_per_sec": 0, 00:16:19.737 "r_mbytes_per_sec": 0, 00:16:19.737 "w_mbytes_per_sec": 0 00:16:19.737 }, 00:16:19.737 "claimed": false, 00:16:19.737 "zoned": false, 00:16:19.737 "supported_io_types": { 00:16:19.737 "read": true, 00:16:19.737 "write": true, 00:16:19.737 "unmap": true, 00:16:19.737 "write_zeroes": true, 00:16:19.737 "flush": true, 00:16:19.737 "reset": true, 00:16:19.737 "compare": true, 00:16:19.737 "compare_and_write": false, 00:16:19.737 "abort": true, 00:16:19.737 "nvme_admin": true, 00:16:19.737 "nvme_io": true 00:16:19.737 }, 00:16:19.737 "driver_specific": { 00:16:19.737 "nvme": [ 00:16:19.737 { 00:16:19.737 "pci_address": "0000:00:11.0", 00:16:19.737 "trid": { 00:16:19.737 "trtype": "PCIe", 00:16:19.738 "traddr": "0000:00:11.0" 00:16:19.738 }, 
00:16:19.738 "ctrlr_data": { 00:16:19.738 "cntlid": 0, 00:16:19.738 "vendor_id": "0x1b36", 00:16:19.738 "model_number": "QEMU NVMe Ctrl", 00:16:19.738 "serial_number": "12341", 00:16:19.738 "firmware_revision": "8.0.0", 00:16:19.738 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:19.738 "oacs": { 00:16:19.738 "security": 0, 00:16:19.738 "format": 1, 00:16:19.738 "firmware": 0, 00:16:19.738 "ns_manage": 1 00:16:19.738 }, 00:16:19.738 "multi_ctrlr": false, 00:16:19.738 "ana_reporting": false 00:16:19.738 }, 00:16:19.738 "vs": { 00:16:19.738 "nvme_version": "1.4" 00:16:19.738 }, 00:16:19.738 "ns_data": { 00:16:19.738 "id": 1, 00:16:19.738 "can_share": false 00:16:19.738 } 00:16:19.738 } 00:16:19.738 ], 00:16:19.738 "mp_policy": "active_passive" 00:16:19.738 } 00:16:19.738 } 00:16:19.738 ]' 00:16:19.738 13:15:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:19.738 13:15:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:16:19.738 13:15:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:19.738 13:15:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=1310720 00:16:19.738 13:15:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:16:19.738 13:15:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 5120 00:16:19.738 13:15:15 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:16:19.738 13:15:15 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:19.738 13:15:15 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:16:19.738 13:15:15 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:19.738 13:15:15 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:19.738 13:15:15 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:16:19.738 13:15:15 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:19.738 13:15:16 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=05fac93a-62fc-4ced-b721-04f7abab40c3 00:16:19.738 13:15:16 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 05fac93a-62fc-4ced-b721-04f7abab40c3 00:16:19.738 13:15:16 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=e5415b58-db9d-4178-9843-66d30c8a3cd4 00:16:19.738 13:15:16 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e5415b58-db9d-4178-9843-66d30c8a3cd4 00:16:19.738 13:15:16 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:16:19.738 13:15:16 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:19.738 13:15:16 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=e5415b58-db9d-4178-9843-66d30c8a3cd4 00:16:19.738 13:15:16 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:16:19.738 13:15:16 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size e5415b58-db9d-4178-9843-66d30c8a3cd4 00:16:19.738 13:15:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=e5415b58-db9d-4178-9843-66d30c8a3cd4 00:16:19.738 13:15:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:19.738 13:15:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:16:19.738 13:15:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:16:19.738 13:15:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e5415b58-db9d-4178-9843-66d30c8a3cd4 00:16:20.304 13:15:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:20.304 { 00:16:20.304 "name": "e5415b58-db9d-4178-9843-66d30c8a3cd4", 00:16:20.304 "aliases": [ 00:16:20.304 "lvs/nvme0n1p0" 00:16:20.304 ], 00:16:20.304 "product_name": "Logical Volume", 00:16:20.304 "block_size": 4096, 00:16:20.304 "num_blocks": 26476544, 00:16:20.304 "uuid": "e5415b58-db9d-4178-9843-66d30c8a3cd4", 00:16:20.304 "assigned_rate_limits": { 00:16:20.304 "rw_ios_per_sec": 0, 00:16:20.304 "rw_mbytes_per_sec": 0, 00:16:20.304 "r_mbytes_per_sec": 0, 00:16:20.304 "w_mbytes_per_sec": 0 00:16:20.304 }, 00:16:20.304 "claimed": false, 00:16:20.304 "zoned": false, 00:16:20.304 "supported_io_types": { 00:16:20.304 "read": true, 00:16:20.304 "write": true, 00:16:20.304 "unmap": true, 00:16:20.304 "write_zeroes": true, 00:16:20.304 "flush": false, 00:16:20.304 "reset": true, 00:16:20.304 "compare": false, 00:16:20.304 "compare_and_write": false, 00:16:20.304 "abort": false, 00:16:20.304 "nvme_admin": false, 00:16:20.304 "nvme_io": false 00:16:20.304 }, 00:16:20.304 "driver_specific": { 00:16:20.304 "lvol": { 00:16:20.304 "lvol_store_uuid": "05fac93a-62fc-4ced-b721-04f7abab40c3", 00:16:20.304 "base_bdev": "nvme0n1", 00:16:20.304 "thin_provision": true, 00:16:20.304 "num_allocated_clusters": 0, 00:16:20.304 "snapshot": false, 00:16:20.304 "clone": false, 00:16:20.305 "esnap_clone": false 00:16:20.305 } 00:16:20.305 } 00:16:20.305 } 00:16:20.305 ]' 00:16:20.305 13:15:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:20.305 13:15:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:16:20.305 13:15:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:20.305 13:15:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:20.305 13:15:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:20.305 13:15:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 103424 00:16:20.305 13:15:16 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:16:20.305 13:15:16 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:16:20.305 13:15:16 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:20.562 13:15:17 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:20.562 13:15:17 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:20.562 13:15:17 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size e5415b58-db9d-4178-9843-66d30c8a3cd4 00:16:20.562 13:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=e5415b58-db9d-4178-9843-66d30c8a3cd4 00:16:20.562 13:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:20.562 13:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:16:20.562 13:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:16:20.562 13:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e5415b58-db9d-4178-9843-66d30c8a3cd4 00:16:20.820 13:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:20.820 { 00:16:20.820 "name": "e5415b58-db9d-4178-9843-66d30c8a3cd4", 00:16:20.820 "aliases": [ 00:16:20.820 "lvs/nvme0n1p0" 
00:16:20.820 ], 00:16:20.820 "product_name": "Logical Volume", 00:16:20.820 "block_size": 4096, 00:16:20.820 "num_blocks": 26476544, 00:16:20.820 "uuid": "e5415b58-db9d-4178-9843-66d30c8a3cd4", 00:16:20.820 "assigned_rate_limits": { 00:16:20.820 "rw_ios_per_sec": 0, 00:16:20.820 "rw_mbytes_per_sec": 0, 00:16:20.820 "r_mbytes_per_sec": 0, 00:16:20.820 "w_mbytes_per_sec": 0 00:16:20.820 }, 00:16:20.820 "claimed": false, 00:16:20.820 "zoned": false, 00:16:20.820 "supported_io_types": { 00:16:20.820 "read": true, 00:16:20.820 "write": true, 00:16:20.820 "unmap": true, 00:16:20.820 "write_zeroes": true, 00:16:20.820 "flush": false, 00:16:20.821 "reset": true, 00:16:20.821 "compare": false, 00:16:20.821 "compare_and_write": false, 00:16:20.821 "abort": false, 00:16:20.821 "nvme_admin": false, 00:16:20.821 "nvme_io": false 00:16:20.821 }, 00:16:20.821 "driver_specific": { 00:16:20.821 "lvol": { 00:16:20.821 "lvol_store_uuid": "05fac93a-62fc-4ced-b721-04f7abab40c3", 00:16:20.821 "base_bdev": "nvme0n1", 00:16:20.821 "thin_provision": true, 00:16:20.821 "num_allocated_clusters": 0, 00:16:20.821 "snapshot": false, 00:16:20.821 "clone": false, 00:16:20.821 "esnap_clone": false 00:16:20.821 } 00:16:20.821 } 00:16:20.821 } 00:16:20.821 ]' 00:16:20.821 13:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:20.821 13:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:16:20.821 13:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:20.821 13:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:20.821 13:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:20.821 13:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 103424 00:16:20.821 13:15:17 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:16:20.821 13:15:17 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:21.079 13:15:17 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:16:21.079 13:15:17 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:16:21.079 13:15:17 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:16:21.079 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:16:21.079 13:15:17 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size e5415b58-db9d-4178-9843-66d30c8a3cd4 00:16:21.079 13:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=e5415b58-db9d-4178-9843-66d30c8a3cd4 00:16:21.079 13:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:21.079 13:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:16:21.079 13:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:16:21.079 13:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e5415b58-db9d-4178-9843-66d30c8a3cd4 00:16:21.337 13:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:21.337 { 00:16:21.337 "name": "e5415b58-db9d-4178-9843-66d30c8a3cd4", 00:16:21.337 "aliases": [ 00:16:21.337 "lvs/nvme0n1p0" 00:16:21.337 ], 00:16:21.337 "product_name": "Logical Volume", 00:16:21.337 "block_size": 4096, 00:16:21.337 "num_blocks": 26476544, 00:16:21.337 "uuid": "e5415b58-db9d-4178-9843-66d30c8a3cd4", 00:16:21.337 "assigned_rate_limits": { 00:16:21.337 "rw_ios_per_sec": 0, 
00:16:21.337 "rw_mbytes_per_sec": 0, 00:16:21.337 "r_mbytes_per_sec": 0, 00:16:21.337 "w_mbytes_per_sec": 0 00:16:21.337 }, 00:16:21.337 "claimed": false, 00:16:21.337 "zoned": false, 00:16:21.337 "supported_io_types": { 00:16:21.337 "read": true, 00:16:21.337 "write": true, 00:16:21.337 "unmap": true, 00:16:21.337 "write_zeroes": true, 00:16:21.337 "flush": false, 00:16:21.337 "reset": true, 00:16:21.337 "compare": false, 00:16:21.337 "compare_and_write": false, 00:16:21.337 "abort": false, 00:16:21.337 "nvme_admin": false, 00:16:21.337 "nvme_io": false 00:16:21.337 }, 00:16:21.337 "driver_specific": { 00:16:21.337 "lvol": { 00:16:21.338 "lvol_store_uuid": "05fac93a-62fc-4ced-b721-04f7abab40c3", 00:16:21.338 "base_bdev": "nvme0n1", 00:16:21.338 "thin_provision": true, 00:16:21.338 "num_allocated_clusters": 0, 00:16:21.338 "snapshot": false, 00:16:21.338 "clone": false, 00:16:21.338 "esnap_clone": false 00:16:21.338 } 00:16:21.338 } 00:16:21.338 } 00:16:21.338 ]' 00:16:21.338 13:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:21.338 13:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:16:21.338 13:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:21.338 13:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:21.338 13:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:21.338 13:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 103424 00:16:21.338 13:15:18 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:16:21.338 13:15:18 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:16:21.338 13:15:18 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e5415b58-db9d-4178-9843-66d30c8a3cd4 -c nvc0n1p0 --l2p_dram_limit 60 00:16:21.596 [2024-07-15 13:15:18.298742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.596 [2024-07-15 13:15:18.298817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:21.596 [2024-07-15 13:15:18.298844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:16:21.596 [2024-07-15 13:15:18.298858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.596 [2024-07-15 13:15:18.298975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.596 [2024-07-15 13:15:18.299009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:21.596 [2024-07-15 13:15:18.299060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:16:21.596 [2024-07-15 13:15:18.299084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.596 [2024-07-15 13:15:18.299161] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:21.596 [2024-07-15 13:15:18.299560] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:21.596 [2024-07-15 13:15:18.299591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.596 [2024-07-15 13:15:18.299604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:21.596 [2024-07-15 13:15:18.299621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms 00:16:21.596 [2024-07-15 13:15:18.299649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
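
Collecting the bdev plumbing from the trace above into one place, a hedged sketch of the FTL bring-up this suite performs (the lvstore and lvol UUIDs are per-run values copied from this log; the PCI addresses are the QEMU NVMe controllers listed by setup.sh earlier):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t \
        -u 05fac93a-62fc-4ced-b721-04f7abab40c3          # thin lvol used as the FTL base device
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1   # 5171 MiB slice for the NV cache
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d e5415b58-db9d-4178-9843-66d30c8a3cd4 -c nvc0n1p0 --l2p_dram_limit 60
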
00:16:21.596 [2024-07-15 13:15:18.299836] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 23898c8b-df41-4434-b657-b5c7c0ca3d6d 00:16:21.596 [2024-07-15 13:15:18.301843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.596 [2024-07-15 13:15:18.301883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:21.596 [2024-07-15 13:15:18.301917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:16:21.596 [2024-07-15 13:15:18.301933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.596 [2024-07-15 13:15:18.311728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.596 [2024-07-15 13:15:18.311818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:21.596 [2024-07-15 13:15:18.311862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.703 ms 00:16:21.596 [2024-07-15 13:15:18.311879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.596 [2024-07-15 13:15:18.312058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.596 [2024-07-15 13:15:18.312090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:21.596 [2024-07-15 13:15:18.312106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:16:21.596 [2024-07-15 13:15:18.312123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.596 [2024-07-15 13:15:18.312263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.596 [2024-07-15 13:15:18.312294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:21.596 [2024-07-15 13:15:18.312310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:16:21.596 [2024-07-15 13:15:18.312325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.596 [2024-07-15 13:15:18.312377] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:21.596 [2024-07-15 13:15:18.314654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.596 [2024-07-15 13:15:18.314704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:21.596 [2024-07-15 13:15:18.314726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.291 ms 00:16:21.596 [2024-07-15 13:15:18.314739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.596 [2024-07-15 13:15:18.314806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.596 [2024-07-15 13:15:18.314829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:21.596 [2024-07-15 13:15:18.314846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:21.596 [2024-07-15 13:15:18.314858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.596 [2024-07-15 13:15:18.314906] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:21.596 [2024-07-15 13:15:18.315132] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:21.596 [2024-07-15 13:15:18.315185] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:21.596 [2024-07-15 13:15:18.315205] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:16:21.596 [2024-07-15 13:15:18.315225] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:21.596 [2024-07-15 13:15:18.315240] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:21.596 [2024-07-15 13:15:18.315259] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:21.596 [2024-07-15 13:15:18.315271] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:21.596 [2024-07-15 13:15:18.315285] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:21.596 [2024-07-15 13:15:18.315298] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:21.596 [2024-07-15 13:15:18.315314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.596 [2024-07-15 13:15:18.315326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:21.596 [2024-07-15 13:15:18.315342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:16:21.596 [2024-07-15 13:15:18.315353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.596 [2024-07-15 13:15:18.315465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.596 [2024-07-15 13:15:18.315481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:21.596 [2024-07-15 13:15:18.315503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:16:21.596 [2024-07-15 13:15:18.315530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.596 [2024-07-15 13:15:18.315660] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:21.596 [2024-07-15 13:15:18.315678] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:21.596 [2024-07-15 13:15:18.315709] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:21.596 [2024-07-15 13:15:18.315722] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.596 [2024-07-15 13:15:18.315737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:21.596 [2024-07-15 13:15:18.315748] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:21.596 [2024-07-15 13:15:18.315762] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:21.596 [2024-07-15 13:15:18.315774] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:21.596 [2024-07-15 13:15:18.315787] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:21.596 [2024-07-15 13:15:18.315798] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:21.596 [2024-07-15 13:15:18.315812] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:21.596 [2024-07-15 13:15:18.315824] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:21.596 [2024-07-15 13:15:18.315838] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:21.596 [2024-07-15 13:15:18.315849] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:21.596 [2024-07-15 13:15:18.315865] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:21.597 [2024-07-15 13:15:18.315876] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.597 
[2024-07-15 13:15:18.315890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:21.597 [2024-07-15 13:15:18.315900] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:21.597 [2024-07-15 13:15:18.315914] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.597 [2024-07-15 13:15:18.315925] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:21.597 [2024-07-15 13:15:18.315939] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:21.597 [2024-07-15 13:15:18.315950] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:21.597 [2024-07-15 13:15:18.315963] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:21.597 [2024-07-15 13:15:18.315974] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:21.597 [2024-07-15 13:15:18.315987] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:21.597 [2024-07-15 13:15:18.315998] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:21.597 [2024-07-15 13:15:18.316013] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:21.597 [2024-07-15 13:15:18.316024] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:21.597 [2024-07-15 13:15:18.316038] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:21.597 [2024-07-15 13:15:18.316049] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:21.597 [2024-07-15 13:15:18.316074] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:21.597 [2024-07-15 13:15:18.316085] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:21.597 [2024-07-15 13:15:18.316098] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:21.597 [2024-07-15 13:15:18.316110] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:21.597 [2024-07-15 13:15:18.316123] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:21.597 [2024-07-15 13:15:18.316134] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:21.597 [2024-07-15 13:15:18.316164] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:21.597 [2024-07-15 13:15:18.316178] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:21.597 [2024-07-15 13:15:18.316193] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:21.597 [2024-07-15 13:15:18.316211] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.597 [2024-07-15 13:15:18.316225] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:21.597 [2024-07-15 13:15:18.316237] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:21.597 [2024-07-15 13:15:18.316250] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.597 [2024-07-15 13:15:18.316262] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:21.597 [2024-07-15 13:15:18.316277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:21.597 [2024-07-15 13:15:18.316290] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:21.597 [2024-07-15 13:15:18.316323] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.597 [2024-07-15 13:15:18.316339] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:16:21.597 [2024-07-15 13:15:18.316355] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:21.597 [2024-07-15 13:15:18.316367] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:21.597 [2024-07-15 13:15:18.316381] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:21.597 [2024-07-15 13:15:18.316393] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:21.597 [2024-07-15 13:15:18.316407] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:21.597 [2024-07-15 13:15:18.316423] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:21.597 [2024-07-15 13:15:18.316441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:21.597 [2024-07-15 13:15:18.316454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:21.597 [2024-07-15 13:15:18.316469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:21.597 [2024-07-15 13:15:18.316481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:21.597 [2024-07-15 13:15:18.316495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:21.597 [2024-07-15 13:15:18.316507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:21.597 [2024-07-15 13:15:18.316521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:21.597 [2024-07-15 13:15:18.316533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:21.597 [2024-07-15 13:15:18.316550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:21.597 [2024-07-15 13:15:18.316562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:21.597 [2024-07-15 13:15:18.316576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:21.597 [2024-07-15 13:15:18.316588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:21.597 [2024-07-15 13:15:18.316602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:21.597 [2024-07-15 13:15:18.316614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:21.597 [2024-07-15 13:15:18.316628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:21.597 [2024-07-15 13:15:18.316641] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:21.597 [2024-07-15 
13:15:18.316656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:21.597 [2024-07-15 13:15:18.316675] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:21.597 [2024-07-15 13:15:18.316690] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:21.597 [2024-07-15 13:15:18.316702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:21.597 [2024-07-15 13:15:18.316718] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:21.597 [2024-07-15 13:15:18.316731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.597 [2024-07-15 13:15:18.316746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:21.597 [2024-07-15 13:15:18.316758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.143 ms 00:16:21.597 [2024-07-15 13:15:18.316776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.597 [2024-07-15 13:15:18.316892] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:16:21.597 [2024-07-15 13:15:18.316916] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:24.915 [2024-07-15 13:15:21.186137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.915 [2024-07-15 13:15:21.186229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:24.915 [2024-07-15 13:15:21.186254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2869.250 ms 00:16:24.915 [2024-07-15 13:15:21.186273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.915 [2024-07-15 13:15:21.201039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.915 [2024-07-15 13:15:21.201112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:24.915 [2024-07-15 13:15:21.201135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.637 ms 00:16:24.915 [2024-07-15 13:15:21.201179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.915 [2024-07-15 13:15:21.201389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.915 [2024-07-15 13:15:21.201424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:24.915 [2024-07-15 13:15:21.201440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:16:24.915 [2024-07-15 13:15:21.201454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.915 [2024-07-15 13:15:21.223842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.915 [2024-07-15 13:15:21.223915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:24.915 [2024-07-15 13:15:21.223943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.318 ms 00:16:24.915 [2024-07-15 13:15:21.223959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.915 [2024-07-15 13:15:21.224035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.915 [2024-07-15 
13:15:21.224057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:24.915 [2024-07-15 13:15:21.224072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:24.915 [2024-07-15 13:15:21.224087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.915 [2024-07-15 13:15:21.224771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.915 [2024-07-15 13:15:21.224809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:24.915 [2024-07-15 13:15:21.224826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:16:24.915 [2024-07-15 13:15:21.224841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.915 [2024-07-15 13:15:21.225039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.915 [2024-07-15 13:15:21.225070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:24.915 [2024-07-15 13:15:21.225085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:16:24.915 [2024-07-15 13:15:21.225098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.915 [2024-07-15 13:15:21.235932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.915 [2024-07-15 13:15:21.236231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:24.915 [2024-07-15 13:15:21.236414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.794 ms 00:16:24.915 [2024-07-15 13:15:21.236502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.915 [2024-07-15 13:15:21.247542] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:24.915 [2024-07-15 13:15:21.269711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.915 [2024-07-15 13:15:21.270106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:24.915 [2024-07-15 13:15:21.270291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.820 ms 00:16:24.915 [2024-07-15 13:15:21.270358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.915 [2024-07-15 13:15:21.326124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.915 [2024-07-15 13:15:21.326462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:24.915 [2024-07-15 13:15:21.326630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.561 ms 00:16:24.915 [2024-07-15 13:15:21.326780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.915 [2024-07-15 13:15:21.327236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.915 [2024-07-15 13:15:21.327380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:24.915 [2024-07-15 13:15:21.327514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:16:24.915 [2024-07-15 13:15:21.327577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.915 [2024-07-15 13:15:21.331545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.915 [2024-07-15 13:15:21.331700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:24.915 [2024-07-15 13:15:21.331843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.816 ms 00:16:24.915 [2024-07-15 
13:15:21.331973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.915 [2024-07-15 13:15:21.335286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.915 [2024-07-15 13:15:21.335325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:24.915 [2024-07-15 13:15:21.335348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.238 ms 00:16:24.915 [2024-07-15 13:15:21.335361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.915 [2024-07-15 13:15:21.335851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.915 [2024-07-15 13:15:21.335877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:24.915 [2024-07-15 13:15:21.335896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:16:24.915 [2024-07-15 13:15:21.335909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.915 [2024-07-15 13:15:21.369501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.915 [2024-07-15 13:15:21.369786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:24.915 [2024-07-15 13:15:21.369969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.532 ms 00:16:24.915 [2024-07-15 13:15:21.370113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.915 [2024-07-15 13:15:21.375497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.915 [2024-07-15 13:15:21.375665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:24.915 [2024-07-15 13:15:21.375810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.189 ms 00:16:24.915 [2024-07-15 13:15:21.375874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.915 [2024-07-15 13:15:21.379815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.915 [2024-07-15 13:15:21.379974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:24.915 [2024-07-15 13:15:21.380106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.768 ms 00:16:24.915 [2024-07-15 13:15:21.380254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.915 [2024-07-15 13:15:21.384646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.915 [2024-07-15 13:15:21.384806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:24.915 [2024-07-15 13:15:21.384941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.207 ms 00:16:24.915 [2024-07-15 13:15:21.385061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.915 [2024-07-15 13:15:21.385211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.915 [2024-07-15 13:15:21.385283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:24.915 [2024-07-15 13:15:21.385411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:24.915 [2024-07-15 13:15:21.385474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.915 [2024-07-15 13:15:21.385653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.915 [2024-07-15 13:15:21.385718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:24.915 [2024-07-15 13:15:21.385846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.052 ms 00:16:24.915 [2024-07-15 13:15:21.385978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.915 [2024-07-15 13:15:21.387498] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3088.201 ms, result 0 00:16:24.915 { 00:16:24.915 "name": "ftl0", 00:16:24.915 "uuid": "23898c8b-df41-4434-b657-b5c7c0ca3d6d" 00:16:24.915 } 00:16:24.915 13:15:21 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:16:24.915 13:15:21 ftl.ftl_fio_basic -- common/autotest_common.sh@895 -- # local bdev_name=ftl0 00:16:24.915 13:15:21 ftl.ftl_fio_basic -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:24.915 13:15:21 ftl.ftl_fio_basic -- common/autotest_common.sh@897 -- # local i 00:16:24.915 13:15:21 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:24.915 13:15:21 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:24.915 13:15:21 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:25.174 13:15:21 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:25.433 [ 00:16:25.433 { 00:16:25.433 "name": "ftl0", 00:16:25.433 "aliases": [ 00:16:25.433 "23898c8b-df41-4434-b657-b5c7c0ca3d6d" 00:16:25.433 ], 00:16:25.433 "product_name": "FTL disk", 00:16:25.433 "block_size": 4096, 00:16:25.433 "num_blocks": 20971520, 00:16:25.433 "uuid": "23898c8b-df41-4434-b657-b5c7c0ca3d6d", 00:16:25.433 "assigned_rate_limits": { 00:16:25.433 "rw_ios_per_sec": 0, 00:16:25.433 "rw_mbytes_per_sec": 0, 00:16:25.433 "r_mbytes_per_sec": 0, 00:16:25.433 "w_mbytes_per_sec": 0 00:16:25.433 }, 00:16:25.433 "claimed": false, 00:16:25.433 "zoned": false, 00:16:25.433 "supported_io_types": { 00:16:25.433 "read": true, 00:16:25.433 "write": true, 00:16:25.433 "unmap": true, 00:16:25.433 "write_zeroes": true, 00:16:25.433 "flush": true, 00:16:25.433 "reset": false, 00:16:25.433 "compare": false, 00:16:25.433 "compare_and_write": false, 00:16:25.433 "abort": false, 00:16:25.433 "nvme_admin": false, 00:16:25.433 "nvme_io": false 00:16:25.433 }, 00:16:25.433 "driver_specific": { 00:16:25.433 "ftl": { 00:16:25.433 "base_bdev": "e5415b58-db9d-4178-9843-66d30c8a3cd4", 00:16:25.433 "cache": "nvc0n1p0" 00:16:25.433 } 00:16:25.433 } 00:16:25.433 } 00:16:25.433 ] 00:16:25.433 13:15:21 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # return 0 00:16:25.433 13:15:21 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:16:25.433 13:15:21 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:25.693 13:15:22 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:16:25.693 13:15:22 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:25.693 [2024-07-15 13:15:22.401115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.693 [2024-07-15 13:15:22.401219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:25.693 [2024-07-15 13:15:22.401255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:25.693 [2024-07-15 13:15:22.401285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.693 [2024-07-15 13:15:22.401337] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel destroy on app_thread 00:16:25.693 [2024-07-15 13:15:22.402245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.693 [2024-07-15 13:15:22.402272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:25.693 [2024-07-15 13:15:22.402294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.877 ms 00:16:25.693 [2024-07-15 13:15:22.402309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.693 [2024-07-15 13:15:22.402809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.693 [2024-07-15 13:15:22.402839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:25.693 [2024-07-15 13:15:22.402858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.456 ms 00:16:25.693 [2024-07-15 13:15:22.402870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.693 [2024-07-15 13:15:22.406079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.693 [2024-07-15 13:15:22.406110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:25.693 [2024-07-15 13:15:22.406130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.170 ms 00:16:25.693 [2024-07-15 13:15:22.406152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.693 [2024-07-15 13:15:22.412695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.693 [2024-07-15 13:15:22.412731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:25.693 [2024-07-15 13:15:22.412750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.494 ms 00:16:25.693 [2024-07-15 13:15:22.412762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.693 [2024-07-15 13:15:22.414848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.693 [2024-07-15 13:15:22.414891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:25.694 [2024-07-15 13:15:22.414914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.938 ms 00:16:25.694 [2024-07-15 13:15:22.414925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.694 [2024-07-15 13:15:22.419468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.694 [2024-07-15 13:15:22.419517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:25.694 [2024-07-15 13:15:22.419539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.483 ms 00:16:25.694 [2024-07-15 13:15:22.419556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.694 [2024-07-15 13:15:22.419752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.694 [2024-07-15 13:15:22.419776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:25.694 [2024-07-15 13:15:22.419795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:16:25.694 [2024-07-15 13:15:22.419807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.694 [2024-07-15 13:15:22.421696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.694 [2024-07-15 13:15:22.421733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:25.694 [2024-07-15 13:15:22.421753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.850 ms 00:16:25.694 
[2024-07-15 13:15:22.421765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.694 [2024-07-15 13:15:22.423293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.694 [2024-07-15 13:15:22.423328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:25.694 [2024-07-15 13:15:22.423349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.472 ms 00:16:25.694 [2024-07-15 13:15:22.423361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.694 [2024-07-15 13:15:22.424686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.694 [2024-07-15 13:15:22.424717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:25.694 [2024-07-15 13:15:22.424734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.268 ms 00:16:25.694 [2024-07-15 13:15:22.424746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.694 [2024-07-15 13:15:22.426096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.694 [2024-07-15 13:15:22.426159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:25.694 [2024-07-15 13:15:22.426190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.246 ms 00:16:25.694 [2024-07-15 13:15:22.426211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.694 [2024-07-15 13:15:22.426286] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:25.694 [2024-07-15 13:15:22.426346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 
00:16:25.694 [2024-07-15 13:15:22.426611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 
wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.426997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.427011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.427026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.427040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.427056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.427069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.427084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.427097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.427112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.427125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.427140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.427529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.427696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.427765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.427834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.427983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.428056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.428139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.428385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.428541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.428733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.428755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.428771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.428784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.428799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.428812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.428827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.428839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.428856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.428869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.428887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.428899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.428914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.428926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.428941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:25.694 [2024-07-15 13:15:22.428954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:25.695 [2024-07-15 13:15:22.428968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:25.695 [2024-07-15 13:15:22.428980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:25.695 [2024-07-15 13:15:22.428995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:25.695 [2024-07-15 13:15:22.429008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:25.695 [2024-07-15 13:15:22.429022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:25.695 [2024-07-15 13:15:22.429034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:25.695 [2024-07-15 13:15:22.429049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:25.695 [2024-07-15 13:15:22.429062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:25.695 [2024-07-15 13:15:22.429076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:25.695 [2024-07-15 13:15:22.429088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:25.695 [2024-07-15 13:15:22.429106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:25.695 [2024-07-15 13:15:22.429118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:25.695 [2024-07-15 13:15:22.429133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:25.695 [2024-07-15 13:15:22.429168] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:25.695 [2024-07-15 13:15:22.429189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:25.695 [2024-07-15 13:15:22.429202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:25.695 [2024-07-15 13:15:22.429218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:25.695 [2024-07-15 13:15:22.429231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:25.695 [2024-07-15 13:15:22.429248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:25.695 [2024-07-15 13:15:22.429262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:25.695 [2024-07-15 13:15:22.429277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:25.695 [2024-07-15 13:15:22.429290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:25.695 [2024-07-15 13:15:22.429305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:25.695 [2024-07-15 13:15:22.429318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:25.695 [2024-07-15 13:15:22.429333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:25.695 [2024-07-15 13:15:22.429356] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:25.695 [2024-07-15 13:15:22.429374] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 23898c8b-df41-4434-b657-b5c7c0ca3d6d 00:16:25.695 [2024-07-15 13:15:22.429388] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:25.695 [2024-07-15 13:15:22.429406] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:25.695 [2024-07-15 13:15:22.429418] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:25.695 [2024-07-15 13:15:22.429433] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:25.695 [2024-07-15 13:15:22.429445] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:25.695 [2024-07-15 13:15:22.429460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:25.695 [2024-07-15 13:15:22.429472] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:25.695 [2024-07-15 13:15:22.429485] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:25.695 [2024-07-15 13:15:22.429496] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:25.695 [2024-07-15 13:15:22.429512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.695 [2024-07-15 13:15:22.429527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:25.695 [2024-07-15 13:15:22.429544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.233 ms 00:16:25.695 [2024-07-15 13:15:22.429556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.954 [2024-07-15 13:15:22.432040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.954 [2024-07-15 13:15:22.432080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize L2P 00:16:25.954 [2024-07-15 13:15:22.432105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.413 ms 00:16:25.954 [2024-07-15 13:15:22.432118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.954 [2024-07-15 13:15:22.432325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.954 [2024-07-15 13:15:22.432346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:25.954 [2024-07-15 13:15:22.432365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:16:25.954 [2024-07-15 13:15:22.432377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.954 [2024-07-15 13:15:22.440942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.954 [2024-07-15 13:15:22.441013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:25.954 [2024-07-15 13:15:22.441039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.954 [2024-07-15 13:15:22.441053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.954 [2024-07-15 13:15:22.441189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.954 [2024-07-15 13:15:22.441211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:25.954 [2024-07-15 13:15:22.441228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.954 [2024-07-15 13:15:22.441240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.954 [2024-07-15 13:15:22.441411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.954 [2024-07-15 13:15:22.441434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:25.954 [2024-07-15 13:15:22.441454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.954 [2024-07-15 13:15:22.441466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.954 [2024-07-15 13:15:22.441526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.954 [2024-07-15 13:15:22.441542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:25.954 [2024-07-15 13:15:22.441557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.954 [2024-07-15 13:15:22.441569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.954 [2024-07-15 13:15:22.459355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.954 [2024-07-15 13:15:22.459436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:25.954 [2024-07-15 13:15:22.459463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.954 [2024-07-15 13:15:22.459478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.954 [2024-07-15 13:15:22.469906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.954 [2024-07-15 13:15:22.469980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:25.954 [2024-07-15 13:15:22.470006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.954 [2024-07-15 13:15:22.470038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.954 [2024-07-15 13:15:22.470229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.954 [2024-07-15 
13:15:22.470264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:25.954 [2024-07-15 13:15:22.470285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.954 [2024-07-15 13:15:22.470298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.954 [2024-07-15 13:15:22.470408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.954 [2024-07-15 13:15:22.470429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:25.954 [2024-07-15 13:15:22.470444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.954 [2024-07-15 13:15:22.470457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.954 [2024-07-15 13:15:22.470579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.954 [2024-07-15 13:15:22.470602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:25.954 [2024-07-15 13:15:22.470639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.954 [2024-07-15 13:15:22.470665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.954 [2024-07-15 13:15:22.470745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.954 [2024-07-15 13:15:22.470764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:25.954 [2024-07-15 13:15:22.470780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.954 [2024-07-15 13:15:22.470792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.954 [2024-07-15 13:15:22.470866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.954 [2024-07-15 13:15:22.470882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:25.954 [2024-07-15 13:15:22.470922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.954 [2024-07-15 13:15:22.470934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.954 [2024-07-15 13:15:22.471005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.954 [2024-07-15 13:15:22.471023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:25.954 [2024-07-15 13:15:22.471038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.954 [2024-07-15 13:15:22.471050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.954 [2024-07-15 13:15:22.471285] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.139 ms, result 0 00:16:25.954 true 00:16:25.954 13:15:22 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 88293 00:16:25.954 13:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@946 -- # '[' -z 88293 ']' 00:16:25.954 13:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # kill -0 88293 00:16:25.954 13:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@951 -- # uname 00:16:25.954 13:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:25.954 13:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88293 00:16:25.954 killing process with pid 88293 00:16:25.954 13:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:25.954 13:15:22 ftl.ftl_fio_basic -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:25.954 13:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88293' 00:16:25.954 13:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@965 -- # kill 88293 00:16:25.954 13:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@970 -- # wait 88293 00:16:29.238 13:15:25 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:16:29.238 13:15:25 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:29.238 13:15:25 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:16:29.238 13:15:25 ftl.ftl_fio_basic -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:29.238 13:15:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:29.238 13:15:25 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:29.238 13:15:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:29.238 13:15:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:16:29.238 13:15:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:29.238 13:15:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # local sanitizers 00:16:29.238 13:15:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:29.238 13:15:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # shift 00:16:29.238 13:15:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local asan_lib= 00:16:29.238 13:15:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:16:29.238 13:15:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:29.238 13:15:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # grep libasan 00:16:29.238 13:15:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:16:29.238 13:15:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:29.238 13:15:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:29.238 13:15:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # break 00:16:29.238 13:15:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:29.238 13:15:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:29.238 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:16:29.238 fio-3.35 00:16:29.238 Starting 1 thread 00:16:34.550 00:16:34.550 test: (groupid=0, jobs=1): err= 0: pid=88478: Mon Jul 15 13:15:30 2024 00:16:34.550 read: IOPS=943, BW=62.7MiB/s (65.7MB/s)(255MiB/4061msec) 00:16:34.550 slat (nsec): min=6002, max=31309, avg=7576.33, stdev=2612.95 00:16:34.550 clat (usec): min=323, max=746, avg=465.93, stdev=46.15 00:16:34.550 lat (usec): min=339, max=753, avg=473.51, stdev=46.86 00:16:34.550 clat percentiles (usec): 00:16:34.550 | 1.00th=[ 375], 5.00th=[ 383], 10.00th=[ 412], 20.00th=[ 445], 00:16:34.550 | 30.00th=[ 
449], 40.00th=[ 449], 50.00th=[ 453], 60.00th=[ 461], 00:16:34.550 | 70.00th=[ 469], 80.00th=[ 515], 90.00th=[ 529], 95.00th=[ 545], 00:16:34.550 | 99.00th=[ 603], 99.50th=[ 619], 99.90th=[ 660], 99.95th=[ 693], 00:16:34.550 | 99.99th=[ 750] 00:16:34.550 write: IOPS=950, BW=63.1MiB/s (66.2MB/s)(256MiB/4057msec); 0 zone resets 00:16:34.550 slat (nsec): min=19610, max=98703, avg=23912.20, stdev=5328.44 00:16:34.550 clat (usec): min=383, max=34314, avg=544.61, stdev=547.25 00:16:34.550 lat (usec): min=407, max=34335, avg=568.52, stdev=547.26 00:16:34.550 clat percentiles (usec): 00:16:34.550 | 1.00th=[ 420], 5.00th=[ 469], 10.00th=[ 474], 20.00th=[ 478], 00:16:34.550 | 30.00th=[ 490], 40.00th=[ 529], 50.00th=[ 545], 60.00th=[ 545], 00:16:34.550 | 70.00th=[ 553], 80.00th=[ 562], 90.00th=[ 619], 95.00th=[ 627], 00:16:34.550 | 99.00th=[ 766], 99.50th=[ 816], 99.90th=[ 865], 99.95th=[ 906], 00:16:34.550 | 99.99th=[34341] 00:16:34.550 bw ( KiB/s): min=60112, max=66776, per=99.95%, avg=64600.00, stdev=2006.70, samples=8 00:16:34.550 iops : min= 884, max= 982, avg=950.00, stdev=29.51, samples=8 00:16:34.550 lat (usec) : 500=55.10%, 750=44.27%, 1000=0.61% 00:16:34.550 lat (msec) : 50=0.01% 00:16:34.550 cpu : usr=99.14%, sys=0.15%, ctx=4, majf=0, minf=1326 00:16:34.551 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:34.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.551 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.551 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:34.551 00:16:34.551 Run status group 0 (all jobs): 00:16:34.551 READ: bw=62.7MiB/s (65.7MB/s), 62.7MiB/s-62.7MiB/s (65.7MB/s-65.7MB/s), io=255MiB (267MB), run=4061-4061msec 00:16:34.551 WRITE: bw=63.1MiB/s (66.2MB/s), 63.1MiB/s-63.1MiB/s (66.2MB/s-66.2MB/s), io=256MiB (269MB), run=4057-4057msec 00:16:34.808 ----------------------------------------------------- 00:16:34.808 Suppressions used: 00:16:34.808 count bytes template 00:16:34.808 1 5 /usr/src/fio/parse.c 00:16:34.808 1 8 libtcmalloc_minimal.so 00:16:34.808 1 904 libcrypto.so 00:16:34.808 ----------------------------------------------------- 00:16:34.808 00:16:34.808 13:15:31 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:16:34.808 13:15:31 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:34.808 13:15:31 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:35.066 13:15:31 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:35.066 13:15:31 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:16:35.066 13:15:31 ftl.ftl_fio_basic -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:35.066 13:15:31 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:35.066 13:15:31 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:35.066 13:15:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:35.066 13:15:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:16:35.066 13:15:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:35.066 13:15:31 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1335 -- # local sanitizers 00:16:35.066 13:15:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:35.067 13:15:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # shift 00:16:35.067 13:15:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local asan_lib= 00:16:35.067 13:15:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:16:35.067 13:15:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # grep libasan 00:16:35.067 13:15:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:35.067 13:15:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:16:35.067 13:15:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:35.067 13:15:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:35.067 13:15:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # break 00:16:35.067 13:15:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:35.067 13:15:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:35.067 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:35.067 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:35.067 fio-3.35 00:16:35.067 Starting 2 threads 00:17:07.154 00:17:07.154 first_half: (groupid=0, jobs=1): err= 0: pid=88564: Mon Jul 15 13:16:00 2024 00:17:07.154 read: IOPS=2338, BW=9356KiB/s (9580kB/s)(256MiB/27994msec) 00:17:07.154 slat (usec): min=4, max=136, avg= 7.57, stdev= 1.92 00:17:07.154 clat (usec): min=755, max=311032, avg=46346.48, stdev=28379.70 00:17:07.154 lat (usec): min=761, max=311041, avg=46354.06, stdev=28379.95 00:17:07.154 clat percentiles (msec): 00:17:07.154 | 1.00th=[ 13], 5.00th=[ 37], 10.00th=[ 38], 20.00th=[ 39], 00:17:07.154 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 40], 00:17:07.154 | 70.00th=[ 43], 80.00th=[ 46], 90.00th=[ 53], 95.00th=[ 90], 00:17:07.154 | 99.00th=[ 197], 99.50th=[ 209], 99.90th=[ 245], 99.95th=[ 266], 00:17:07.154 | 99.99th=[ 305] 00:17:07.154 write: IOPS=2345, BW=9380KiB/s (9606kB/s)(256MiB/27946msec); 0 zone resets 00:17:07.154 slat (usec): min=6, max=480, avg= 8.96, stdev= 5.26 00:17:07.154 clat (usec): min=468, max=57143, avg=8333.36, stdev=8234.25 00:17:07.154 lat (usec): min=486, max=57154, avg=8342.32, stdev=8234.40 00:17:07.154 clat percentiles (usec): 00:17:07.154 | 1.00th=[ 1074], 5.00th=[ 1516], 10.00th=[ 1909], 20.00th=[ 3458], 00:17:07.154 | 30.00th=[ 4490], 40.00th=[ 5669], 50.00th=[ 6456], 60.00th=[ 7373], 00:17:07.154 | 70.00th=[ 8029], 80.00th=[ 9634], 90.00th=[15533], 95.00th=[22676], 00:17:07.154 | 99.00th=[45351], 99.50th=[49546], 99.90th=[54789], 99.95th=[55837], 00:17:07.154 | 99.99th=[56361] 00:17:07.154 bw ( KiB/s): min= 584, max=41720, per=100.00%, avg=22653.22, stdev=12283.36, samples=23 00:17:07.154 iops : min= 146, max=10430, avg=5663.30, stdev=3070.84, samples=23 00:17:07.154 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.28% 00:17:07.154 lat (msec) : 2=5.15%, 4=7.19%, 10=27.87%, 20=8.16%, 50=45.33% 00:17:07.154 
lat (msec) : 100=3.69%, 250=2.25%, 500=0.04% 00:17:07.154 cpu : usr=99.08%, sys=0.15%, ctx=53, majf=0, minf=5597 00:17:07.154 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:07.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.154 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:07.154 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:07.154 second_half: (groupid=0, jobs=1): err= 0: pid=88565: Mon Jul 15 13:16:00 2024 00:17:07.154 read: IOPS=2359, BW=9437KiB/s (9663kB/s)(256MiB/27759msec) 00:17:07.154 slat (nsec): min=4716, max=36892, avg=7656.02, stdev=1723.87 00:17:07.154 clat (msec): min=12, max=228, avg=46.81, stdev=25.95 00:17:07.154 lat (msec): min=12, max=228, avg=46.82, stdev=25.95 00:17:07.154 clat percentiles (msec): 00:17:07.154 | 1.00th=[ 36], 5.00th=[ 38], 10.00th=[ 38], 20.00th=[ 39], 00:17:07.154 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 40], 00:17:07.154 | 70.00th=[ 44], 80.00th=[ 47], 90.00th=[ 54], 95.00th=[ 83], 00:17:07.154 | 99.00th=[ 192], 99.50th=[ 205], 99.90th=[ 220], 99.95th=[ 222], 00:17:07.154 | 99.99th=[ 226] 00:17:07.154 write: IOPS=2375, BW=9500KiB/s (9728kB/s)(256MiB/27594msec); 0 zone resets 00:17:07.154 slat (usec): min=6, max=650, avg= 8.85, stdev= 4.98 00:17:07.154 clat (usec): min=469, max=46921, avg=7415.09, stdev=4715.65 00:17:07.154 lat (usec): min=484, max=46928, avg=7423.94, stdev=4715.86 00:17:07.154 clat percentiles (usec): 00:17:07.154 | 1.00th=[ 1336], 5.00th=[ 2245], 10.00th=[ 3097], 20.00th=[ 4047], 00:17:07.154 | 30.00th=[ 5080], 40.00th=[ 5800], 50.00th=[ 6456], 60.00th=[ 7177], 00:17:07.154 | 70.00th=[ 7767], 80.00th=[ 9372], 90.00th=[13698], 95.00th=[15926], 00:17:07.154 | 99.00th=[24249], 99.50th=[34341], 99.90th=[43779], 99.95th=[45351], 00:17:07.154 | 99.99th=[45876] 00:17:07.154 bw ( KiB/s): min= 832, max=41464, per=100.00%, avg=23663.27, stdev=15069.50, samples=22 00:17:07.154 iops : min= 208, max=10366, avg=5915.82, stdev=3767.37, samples=22 00:17:07.154 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.13% 00:17:07.154 lat (msec) : 2=1.80%, 4=7.68%, 10=31.17%, 20=8.60%, 50=44.18% 00:17:07.154 lat (msec) : 100=4.29%, 250=2.09% 00:17:07.155 cpu : usr=99.15%, sys=0.18%, ctx=46, majf=0, minf=5541 00:17:07.155 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:07.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.155 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:07.155 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:07.155 00:17:07.155 Run status group 0 (all jobs): 00:17:07.155 READ: bw=18.3MiB/s (19.2MB/s), 9356KiB/s-9437KiB/s (9580kB/s-9663kB/s), io=512MiB (536MB), run=27759-27994msec 00:17:07.155 WRITE: bw=18.3MiB/s (19.2MB/s), 9380KiB/s-9500KiB/s (9606kB/s-9728kB/s), io=512MiB (537MB), run=27594-27946msec 00:17:07.155 ----------------------------------------------------- 00:17:07.155 Suppressions used: 00:17:07.155 count bytes template 00:17:07.155 2 10 /usr/src/fio/parse.c 00:17:07.155 2 192 /usr/src/fio/iolog.c 00:17:07.155 1 8 libtcmalloc_minimal.so 00:17:07.155 1 904 libcrypto.so 00:17:07.155 ----------------------------------------------------- 00:17:07.155 00:17:07.155 13:16:01 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit 
randw-verify-j2 00:17:07.155 13:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:07.155 13:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:07.155 13:16:01 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:07.155 13:16:01 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:17:07.155 13:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:07.155 13:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:07.155 13:16:01 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:07.155 13:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:07.155 13:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:17:07.155 13:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:07.155 13:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # local sanitizers 00:17:07.155 13:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:07.155 13:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # shift 00:17:07.155 13:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local asan_lib= 00:17:07.155 13:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:17:07.155 13:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:07.155 13:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # grep libasan 00:17:07.155 13:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:17:07.155 13:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:07.155 13:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:07.155 13:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # break 00:17:07.155 13:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:07.155 13:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:07.155 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:07.155 fio-3.35 00:17:07.155 Starting 1 thread 00:17:22.084 00:17:22.084 test: (groupid=0, jobs=1): err= 0: pid=88906: Mon Jul 15 13:16:18 2024 00:17:22.084 read: IOPS=6478, BW=25.3MiB/s (26.5MB/s)(255MiB/10065msec) 00:17:22.084 slat (usec): min=4, max=101, avg= 6.87, stdev= 1.77 00:17:22.084 clat (usec): min=797, max=37291, avg=19746.87, stdev=1376.05 00:17:22.084 lat (usec): min=802, max=37299, avg=19753.75, stdev=1376.07 00:17:22.084 clat percentiles (usec): 00:17:22.084 | 1.00th=[18482], 5.00th=[18744], 10.00th=[18744], 20.00th=[19006], 00:17:22.084 | 30.00th=[19006], 40.00th=[19268], 50.00th=[19268], 60.00th=[19530], 00:17:22.084 | 70.00th=[19792], 80.00th=[20055], 90.00th=[21890], 95.00th=[22414], 00:17:22.084 | 99.00th=[23462], 99.50th=[24511], 99.90th=[27919], 
99.95th=[32900], 00:17:22.084 | 99.99th=[36439] 00:17:22.084 write: IOPS=11.6k, BW=45.3MiB/s (47.6MB/s)(256MiB/5645msec); 0 zone resets 00:17:22.084 slat (usec): min=6, max=640, avg= 9.58, stdev= 5.94 00:17:22.084 clat (usec): min=642, max=60754, avg=10958.25, stdev=14139.24 00:17:22.084 lat (usec): min=650, max=60763, avg=10967.83, stdev=14139.30 00:17:22.084 clat percentiles (usec): 00:17:22.084 | 1.00th=[ 955], 5.00th=[ 1156], 10.00th=[ 1270], 20.00th=[ 1483], 00:17:22.084 | 30.00th=[ 1696], 40.00th=[ 2212], 50.00th=[ 6980], 60.00th=[ 7963], 00:17:22.084 | 70.00th=[ 9110], 80.00th=[10945], 90.00th=[40633], 95.00th=[44827], 00:17:22.084 | 99.00th=[49021], 99.50th=[51119], 99.90th=[55837], 99.95th=[57410], 00:17:22.084 | 99.99th=[60031] 00:17:22.084 bw ( KiB/s): min=11056, max=65160, per=94.06%, avg=43682.17, stdev=14340.04, samples=12 00:17:22.084 iops : min= 2764, max=16290, avg=10920.50, stdev=3584.99, samples=12 00:17:22.084 lat (usec) : 750=0.02%, 1000=0.83% 00:17:22.084 lat (msec) : 2=18.12%, 4=2.01%, 10=16.98%, 20=43.63%, 50=18.06% 00:17:22.084 lat (msec) : 100=0.35% 00:17:22.084 cpu : usr=98.85%, sys=0.29%, ctx=26, majf=0, minf=5577 00:17:22.084 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:22.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.084 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:22.084 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:22.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:22.084 00:17:22.084 Run status group 0 (all jobs): 00:17:22.084 READ: bw=25.3MiB/s (26.5MB/s), 25.3MiB/s-25.3MiB/s (26.5MB/s-26.5MB/s), io=255MiB (267MB), run=10065-10065msec 00:17:22.084 WRITE: bw=45.3MiB/s (47.6MB/s), 45.3MiB/s-45.3MiB/s (47.6MB/s-47.6MB/s), io=256MiB (268MB), run=5645-5645msec 00:17:23.018 ----------------------------------------------------- 00:17:23.018 Suppressions used: 00:17:23.018 count bytes template 00:17:23.018 1 5 /usr/src/fio/parse.c 00:17:23.018 2 192 /usr/src/fio/iolog.c 00:17:23.018 1 8 libtcmalloc_minimal.so 00:17:23.018 1 904 libcrypto.so 00:17:23.018 ----------------------------------------------------- 00:17:23.018 00:17:23.018 13:16:19 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:17:23.018 13:16:19 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:23.018 13:16:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:23.018 13:16:19 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:23.018 13:16:19 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:17:23.018 13:16:19 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:23.018 Remove shared memory files 00:17:23.018 13:16:19 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:17:23.018 13:16:19 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:17:23.018 13:16:19 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid74180 /dev/shm/spdk_tgt_trace.pid87267 00:17:23.018 13:16:19 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:23.018 13:16:19 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:17:23.018 ************************************ 00:17:23.018 END TEST ftl_fio_basic 00:17:23.018 ************************************ 00:17:23.018 00:17:23.018 real 1m5.826s 00:17:23.018 user 2m27.814s 00:17:23.018 sys 0m3.830s 00:17:23.018 
13:16:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:23.018 13:16:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:23.018 13:16:19 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:23.018 13:16:19 ftl -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:17:23.018 13:16:19 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:23.018 13:16:19 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:23.018 ************************************ 00:17:23.018 START TEST ftl_bdevperf 00:17:23.018 ************************************ 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:23.018 * Looking for test storage... 00:17:23.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:23.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=89158 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 89158 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 89158 ']' 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:23.018 13:16:19 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:23.276 [2024-07-15 13:16:19.829671] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:17:23.276 [2024-07-15 13:16:19.829877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89158 ] 00:17:23.276 [2024-07-15 13:16:19.978318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.534 [2024-07-15 13:16:20.075777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.099 13:16:20 ftl.ftl_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:24.099 13:16:20 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:17:24.099 13:16:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:24.099 13:16:20 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:17:24.099 13:16:20 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:24.099 13:16:20 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:17:24.099 13:16:20 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:17:24.099 13:16:20 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:24.664 13:16:21 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:24.664 13:16:21 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:17:24.664 13:16:21 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:24.664 13:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:17:24.664 13:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:24.664 13:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:17:24.664 13:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:17:24.664 13:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:24.664 13:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:24.664 { 00:17:24.664 "name": "nvme0n1", 00:17:24.664 "aliases": [ 00:17:24.664 "f94844a6-f565-492f-ab53-e07d50980c6e" 00:17:24.664 ], 00:17:24.664 "product_name": "NVMe disk", 00:17:24.664 "block_size": 4096, 00:17:24.664 "num_blocks": 1310720, 00:17:24.664 "uuid": "f94844a6-f565-492f-ab53-e07d50980c6e", 00:17:24.664 "assigned_rate_limits": { 00:17:24.664 "rw_ios_per_sec": 0, 00:17:24.664 "rw_mbytes_per_sec": 0, 00:17:24.664 "r_mbytes_per_sec": 0, 00:17:24.664 "w_mbytes_per_sec": 0 00:17:24.664 }, 00:17:24.664 "claimed": true, 00:17:24.664 "claim_type": "read_many_write_one", 00:17:24.664 "zoned": false, 00:17:24.664 "supported_io_types": { 00:17:24.664 "read": true, 00:17:24.664 "write": true, 00:17:24.664 "unmap": true, 00:17:24.664 "write_zeroes": true, 00:17:24.664 "flush": true, 00:17:24.664 "reset": true, 00:17:24.664 "compare": true, 00:17:24.664 "compare_and_write": false, 00:17:24.664 "abort": true, 00:17:24.664 "nvme_admin": true, 00:17:24.664 "nvme_io": true 00:17:24.664 }, 00:17:24.664 "driver_specific": { 00:17:24.664 "nvme": [ 00:17:24.664 { 00:17:24.664 "pci_address": "0000:00:11.0", 00:17:24.664 "trid": { 00:17:24.664 "trtype": "PCIe", 00:17:24.664 "traddr": "0000:00:11.0" 00:17:24.664 }, 00:17:24.664 "ctrlr_data": { 00:17:24.664 "cntlid": 0, 00:17:24.664 "vendor_id": "0x1b36", 00:17:24.664 "model_number": "QEMU NVMe Ctrl", 00:17:24.664 "serial_number": "12341", 
00:17:24.664 "firmware_revision": "8.0.0", 00:17:24.664 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:24.664 "oacs": { 00:17:24.664 "security": 0, 00:17:24.664 "format": 1, 00:17:24.664 "firmware": 0, 00:17:24.664 "ns_manage": 1 00:17:24.664 }, 00:17:24.664 "multi_ctrlr": false, 00:17:24.664 "ana_reporting": false 00:17:24.664 }, 00:17:24.664 "vs": { 00:17:24.664 "nvme_version": "1.4" 00:17:24.664 }, 00:17:24.664 "ns_data": { 00:17:24.664 "id": 1, 00:17:24.664 "can_share": false 00:17:24.664 } 00:17:24.664 } 00:17:24.664 ], 00:17:24.664 "mp_policy": "active_passive" 00:17:24.664 } 00:17:24.664 } 00:17:24.664 ]' 00:17:24.664 13:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:24.923 13:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:17:24.923 13:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:24.923 13:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=1310720 00:17:24.923 13:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:17:24.923 13:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 5120 00:17:24.923 13:16:21 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:17:24.923 13:16:21 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:24.923 13:16:21 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:17:24.923 13:16:21 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:24.923 13:16:21 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:25.180 13:16:21 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=05fac93a-62fc-4ced-b721-04f7abab40c3 00:17:25.180 13:16:21 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:17:25.180 13:16:21 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 05fac93a-62fc-4ced-b721-04f7abab40c3 00:17:25.438 13:16:22 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:25.696 13:16:22 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=603b4d03-c66a-4151-85a8-f8ddbc340fb6 00:17:25.696 13:16:22 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 603b4d03-c66a-4151-85a8-f8ddbc340fb6 00:17:25.954 13:16:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=13b22173-dabd-4a2e-89fd-037ea1faa908 00:17:25.954 13:16:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 13b22173-dabd-4a2e-89fd-037ea1faa908 00:17:25.954 13:16:22 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:17:25.954 13:16:22 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:25.954 13:16:22 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=13b22173-dabd-4a2e-89fd-037ea1faa908 00:17:25.954 13:16:22 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:17:25.954 13:16:22 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 13b22173-dabd-4a2e-89fd-037ea1faa908 00:17:25.954 13:16:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=13b22173-dabd-4a2e-89fd-037ea1faa908 00:17:25.954 13:16:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:25.954 13:16:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:17:25.954 13:16:22 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1377 -- # local nb 00:17:25.954 13:16:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 13b22173-dabd-4a2e-89fd-037ea1faa908 00:17:26.212 13:16:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:26.212 { 00:17:26.212 "name": "13b22173-dabd-4a2e-89fd-037ea1faa908", 00:17:26.212 "aliases": [ 00:17:26.212 "lvs/nvme0n1p0" 00:17:26.212 ], 00:17:26.212 "product_name": "Logical Volume", 00:17:26.212 "block_size": 4096, 00:17:26.212 "num_blocks": 26476544, 00:17:26.212 "uuid": "13b22173-dabd-4a2e-89fd-037ea1faa908", 00:17:26.212 "assigned_rate_limits": { 00:17:26.212 "rw_ios_per_sec": 0, 00:17:26.212 "rw_mbytes_per_sec": 0, 00:17:26.212 "r_mbytes_per_sec": 0, 00:17:26.212 "w_mbytes_per_sec": 0 00:17:26.212 }, 00:17:26.212 "claimed": false, 00:17:26.212 "zoned": false, 00:17:26.212 "supported_io_types": { 00:17:26.212 "read": true, 00:17:26.212 "write": true, 00:17:26.212 "unmap": true, 00:17:26.212 "write_zeroes": true, 00:17:26.212 "flush": false, 00:17:26.212 "reset": true, 00:17:26.212 "compare": false, 00:17:26.212 "compare_and_write": false, 00:17:26.212 "abort": false, 00:17:26.212 "nvme_admin": false, 00:17:26.212 "nvme_io": false 00:17:26.212 }, 00:17:26.212 "driver_specific": { 00:17:26.212 "lvol": { 00:17:26.212 "lvol_store_uuid": "603b4d03-c66a-4151-85a8-f8ddbc340fb6", 00:17:26.212 "base_bdev": "nvme0n1", 00:17:26.212 "thin_provision": true, 00:17:26.212 "num_allocated_clusters": 0, 00:17:26.212 "snapshot": false, 00:17:26.212 "clone": false, 00:17:26.212 "esnap_clone": false 00:17:26.212 } 00:17:26.212 } 00:17:26.212 } 00:17:26.212 ]' 00:17:26.212 13:16:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:26.212 13:16:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:17:26.212 13:16:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:26.470 13:16:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:26.470 13:16:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:26.470 13:16:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 103424 00:17:26.470 13:16:22 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:17:26.470 13:16:22 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:17:26.470 13:16:22 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:26.728 13:16:23 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:26.728 13:16:23 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:26.728 13:16:23 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 13b22173-dabd-4a2e-89fd-037ea1faa908 00:17:26.728 13:16:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=13b22173-dabd-4a2e-89fd-037ea1faa908 00:17:26.728 13:16:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:26.728 13:16:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:17:26.728 13:16:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:17:26.728 13:16:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 13b22173-dabd-4a2e-89fd-037ea1faa908 00:17:26.986 13:16:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:26.986 { 00:17:26.986 "name": 
"13b22173-dabd-4a2e-89fd-037ea1faa908", 00:17:26.986 "aliases": [ 00:17:26.986 "lvs/nvme0n1p0" 00:17:26.986 ], 00:17:26.986 "product_name": "Logical Volume", 00:17:26.986 "block_size": 4096, 00:17:26.986 "num_blocks": 26476544, 00:17:26.986 "uuid": "13b22173-dabd-4a2e-89fd-037ea1faa908", 00:17:26.986 "assigned_rate_limits": { 00:17:26.986 "rw_ios_per_sec": 0, 00:17:26.986 "rw_mbytes_per_sec": 0, 00:17:26.986 "r_mbytes_per_sec": 0, 00:17:26.986 "w_mbytes_per_sec": 0 00:17:26.986 }, 00:17:26.986 "claimed": false, 00:17:26.986 "zoned": false, 00:17:26.986 "supported_io_types": { 00:17:26.986 "read": true, 00:17:26.986 "write": true, 00:17:26.986 "unmap": true, 00:17:26.986 "write_zeroes": true, 00:17:26.986 "flush": false, 00:17:26.986 "reset": true, 00:17:26.986 "compare": false, 00:17:26.986 "compare_and_write": false, 00:17:26.986 "abort": false, 00:17:26.986 "nvme_admin": false, 00:17:26.987 "nvme_io": false 00:17:26.987 }, 00:17:26.987 "driver_specific": { 00:17:26.987 "lvol": { 00:17:26.987 "lvol_store_uuid": "603b4d03-c66a-4151-85a8-f8ddbc340fb6", 00:17:26.987 "base_bdev": "nvme0n1", 00:17:26.987 "thin_provision": true, 00:17:26.987 "num_allocated_clusters": 0, 00:17:26.987 "snapshot": false, 00:17:26.987 "clone": false, 00:17:26.987 "esnap_clone": false 00:17:26.987 } 00:17:26.987 } 00:17:26.987 } 00:17:26.987 ]' 00:17:26.987 13:16:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:26.987 13:16:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:17:26.987 13:16:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:26.987 13:16:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:26.987 13:16:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:26.987 13:16:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 103424 00:17:26.987 13:16:23 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:17:26.987 13:16:23 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:27.257 13:16:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:17:27.257 13:16:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size 13b22173-dabd-4a2e-89fd-037ea1faa908 00:17:27.257 13:16:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=13b22173-dabd-4a2e-89fd-037ea1faa908 00:17:27.257 13:16:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:27.257 13:16:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:17:27.257 13:16:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:17:27.257 13:16:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 13b22173-dabd-4a2e-89fd-037ea1faa908 00:17:27.517 13:16:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:27.517 { 00:17:27.517 "name": "13b22173-dabd-4a2e-89fd-037ea1faa908", 00:17:27.517 "aliases": [ 00:17:27.517 "lvs/nvme0n1p0" 00:17:27.517 ], 00:17:27.517 "product_name": "Logical Volume", 00:17:27.517 "block_size": 4096, 00:17:27.517 "num_blocks": 26476544, 00:17:27.517 "uuid": "13b22173-dabd-4a2e-89fd-037ea1faa908", 00:17:27.517 "assigned_rate_limits": { 00:17:27.517 "rw_ios_per_sec": 0, 00:17:27.517 "rw_mbytes_per_sec": 0, 00:17:27.517 "r_mbytes_per_sec": 0, 00:17:27.517 "w_mbytes_per_sec": 0 00:17:27.517 }, 00:17:27.517 "claimed": false, 
00:17:27.517 "zoned": false, 00:17:27.517 "supported_io_types": { 00:17:27.517 "read": true, 00:17:27.517 "write": true, 00:17:27.517 "unmap": true, 00:17:27.517 "write_zeroes": true, 00:17:27.517 "flush": false, 00:17:27.517 "reset": true, 00:17:27.517 "compare": false, 00:17:27.517 "compare_and_write": false, 00:17:27.517 "abort": false, 00:17:27.517 "nvme_admin": false, 00:17:27.517 "nvme_io": false 00:17:27.517 }, 00:17:27.517 "driver_specific": { 00:17:27.517 "lvol": { 00:17:27.517 "lvol_store_uuid": "603b4d03-c66a-4151-85a8-f8ddbc340fb6", 00:17:27.517 "base_bdev": "nvme0n1", 00:17:27.517 "thin_provision": true, 00:17:27.517 "num_allocated_clusters": 0, 00:17:27.517 "snapshot": false, 00:17:27.517 "clone": false, 00:17:27.517 "esnap_clone": false 00:17:27.517 } 00:17:27.517 } 00:17:27.517 } 00:17:27.517 ]' 00:17:27.517 13:16:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:27.517 13:16:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:17:27.517 13:16:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:27.517 13:16:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:27.517 13:16:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:27.517 13:16:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 103424 00:17:27.517 13:16:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:17:27.517 13:16:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 13b22173-dabd-4a2e-89fd-037ea1faa908 -c nvc0n1p0 --l2p_dram_limit 20 00:17:27.796 [2024-07-15 13:16:24.483041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.796 [2024-07-15 13:16:24.483122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:27.796 [2024-07-15 13:16:24.483167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:27.796 [2024-07-15 13:16:24.483189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.796 [2024-07-15 13:16:24.483353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.796 [2024-07-15 13:16:24.483392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:27.796 [2024-07-15 13:16:24.483406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:17:27.796 [2024-07-15 13:16:24.483427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.796 [2024-07-15 13:16:24.483467] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:27.796 [2024-07-15 13:16:24.483850] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:27.796 [2024-07-15 13:16:24.483890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.796 [2024-07-15 13:16:24.483912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:27.796 [2024-07-15 13:16:24.483925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:17:27.796 [2024-07-15 13:16:24.483939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.796 [2024-07-15 13:16:24.484070] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 41f02daa-c6a6-4ce1-a78f-a36e89420cbf 00:17:27.796 [2024-07-15 13:16:24.485882] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.796 [2024-07-15 13:16:24.485925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:27.796 [2024-07-15 13:16:24.485947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:27.796 [2024-07-15 13:16:24.485973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.796 [2024-07-15 13:16:24.496134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.796 [2024-07-15 13:16:24.496496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:27.796 [2024-07-15 13:16:24.496548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.049 ms 00:17:27.796 [2024-07-15 13:16:24.496563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.796 [2024-07-15 13:16:24.496709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.796 [2024-07-15 13:16:24.496733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:27.796 [2024-07-15 13:16:24.496749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:27.796 [2024-07-15 13:16:24.496769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.796 [2024-07-15 13:16:24.496893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.796 [2024-07-15 13:16:24.496912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:27.796 [2024-07-15 13:16:24.496928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:27.796 [2024-07-15 13:16:24.496940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.796 [2024-07-15 13:16:24.496989] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:27.796 [2024-07-15 13:16:24.499332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.796 [2024-07-15 13:16:24.499401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:27.796 [2024-07-15 13:16:24.499420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.361 ms 00:17:27.796 [2024-07-15 13:16:24.499441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.796 [2024-07-15 13:16:24.499491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.796 [2024-07-15 13:16:24.499510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:27.796 [2024-07-15 13:16:24.499523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:27.796 [2024-07-15 13:16:24.499540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.796 [2024-07-15 13:16:24.499564] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:27.796 [2024-07-15 13:16:24.499741] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:27.796 [2024-07-15 13:16:24.499762] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:27.796 [2024-07-15 13:16:24.499783] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:27.796 [2024-07-15 13:16:24.499800] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:27.796 
[2024-07-15 13:16:24.499825] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:27.796 [2024-07-15 13:16:24.499841] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:27.796 [2024-07-15 13:16:24.499863] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:27.796 [2024-07-15 13:16:24.499878] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:27.796 [2024-07-15 13:16:24.499892] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:27.796 [2024-07-15 13:16:24.499905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.796 [2024-07-15 13:16:24.499919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:27.796 [2024-07-15 13:16:24.499931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:17:27.796 [2024-07-15 13:16:24.499955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.796 [2024-07-15 13:16:24.500045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.796 [2024-07-15 13:16:24.500065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:27.796 [2024-07-15 13:16:24.500078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:27.796 [2024-07-15 13:16:24.500096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.796 [2024-07-15 13:16:24.500401] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:27.796 [2024-07-15 13:16:24.500571] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:27.796 [2024-07-15 13:16:24.500628] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:27.796 [2024-07-15 13:16:24.500840] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.796 [2024-07-15 13:16:24.500895] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:27.796 [2024-07-15 13:16:24.500990] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:27.796 [2024-07-15 13:16:24.501038] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:27.796 [2024-07-15 13:16:24.501080] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:27.796 [2024-07-15 13:16:24.501119] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:27.796 [2024-07-15 13:16:24.501248] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:27.796 [2024-07-15 13:16:24.501302] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:27.796 [2024-07-15 13:16:24.501345] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:27.796 [2024-07-15 13:16:24.501410] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:27.796 [2024-07-15 13:16:24.501432] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:27.796 [2024-07-15 13:16:24.501443] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:27.796 [2024-07-15 13:16:24.501457] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.796 [2024-07-15 13:16:24.501468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:27.796 [2024-07-15 13:16:24.501481] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:27.796 [2024-07-15 
13:16:24.501491] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.796 [2024-07-15 13:16:24.501507] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:27.796 [2024-07-15 13:16:24.501517] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:27.796 [2024-07-15 13:16:24.501530] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:27.796 [2024-07-15 13:16:24.501540] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:27.796 [2024-07-15 13:16:24.501553] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:27.796 [2024-07-15 13:16:24.501564] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:27.796 [2024-07-15 13:16:24.501578] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:27.796 [2024-07-15 13:16:24.501589] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:27.796 [2024-07-15 13:16:24.501601] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:27.796 [2024-07-15 13:16:24.501612] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:27.796 [2024-07-15 13:16:24.501627] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:27.796 [2024-07-15 13:16:24.501638] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:27.796 [2024-07-15 13:16:24.501650] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:27.796 [2024-07-15 13:16:24.501672] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:27.796 [2024-07-15 13:16:24.501685] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:27.796 [2024-07-15 13:16:24.501696] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:27.796 [2024-07-15 13:16:24.501709] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:27.796 [2024-07-15 13:16:24.501719] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:27.796 [2024-07-15 13:16:24.501732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:27.796 [2024-07-15 13:16:24.501743] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:27.796 [2024-07-15 13:16:24.501756] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.796 [2024-07-15 13:16:24.501766] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:27.796 [2024-07-15 13:16:24.501779] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:27.796 [2024-07-15 13:16:24.501797] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.796 [2024-07-15 13:16:24.501810] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:27.796 [2024-07-15 13:16:24.501822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:27.796 [2024-07-15 13:16:24.501850] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:27.796 [2024-07-15 13:16:24.501862] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.796 [2024-07-15 13:16:24.501877] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:27.797 [2024-07-15 13:16:24.501888] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:27.797 [2024-07-15 13:16:24.501904] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 3.38 MiB 00:17:27.797 [2024-07-15 13:16:24.501915] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:27.797 [2024-07-15 13:16:24.501929] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:27.797 [2024-07-15 13:16:24.501940] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:27.797 [2024-07-15 13:16:24.501959] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:27.797 [2024-07-15 13:16:24.501975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:27.797 [2024-07-15 13:16:24.501994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:27.797 [2024-07-15 13:16:24.502007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:27.797 [2024-07-15 13:16:24.502036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:27.797 [2024-07-15 13:16:24.502050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:27.797 [2024-07-15 13:16:24.502074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:27.797 [2024-07-15 13:16:24.502087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:27.797 [2024-07-15 13:16:24.502104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:27.797 [2024-07-15 13:16:24.502116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:27.797 [2024-07-15 13:16:24.502131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:27.797 [2024-07-15 13:16:24.502155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:27.797 [2024-07-15 13:16:24.502174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:27.797 [2024-07-15 13:16:24.502191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:27.797 [2024-07-15 13:16:24.502206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:27.797 [2024-07-15 13:16:24.502218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:27.797 [2024-07-15 13:16:24.502233] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:27.797 [2024-07-15 13:16:24.502248] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:27.797 [2024-07-15 13:16:24.502279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:27.797 [2024-07-15 13:16:24.502291] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:27.797 [2024-07-15 13:16:24.502307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:27.797 [2024-07-15 13:16:24.502326] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:27.797 [2024-07-15 13:16:24.502346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.797 [2024-07-15 13:16:24.502359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:27.797 [2024-07-15 13:16:24.502378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.204 ms 00:17:27.797 [2024-07-15 13:16:24.502390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.797 [2024-07-15 13:16:24.502494] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:17:27.797 [2024-07-15 13:16:24.502515] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:30.320 [2024-07-15 13:16:27.030896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.320 [2024-07-15 13:16:27.030974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:30.320 [2024-07-15 13:16:27.031028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2528.396 ms 00:17:30.320 [2024-07-15 13:16:27.031048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.320 [2024-07-15 13:16:27.053992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.320 [2024-07-15 13:16:27.054085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:30.320 [2024-07-15 13:16:27.054114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.777 ms 00:17:30.320 [2024-07-15 13:16:27.054128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.320 [2024-07-15 13:16:27.054341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.320 [2024-07-15 13:16:27.054365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:30.320 [2024-07-15 13:16:27.054382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:30.320 [2024-07-15 13:16:27.054404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.578 [2024-07-15 13:16:27.067189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.578 [2024-07-15 13:16:27.067252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:30.578 [2024-07-15 13:16:27.067277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.712 ms 00:17:30.578 [2024-07-15 13:16:27.067291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.578 [2024-07-15 13:16:27.067353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.578 [2024-07-15 13:16:27.067373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:30.578 [2024-07-15 13:16:27.067389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:30.578 [2024-07-15 13:16:27.067402] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.578 [2024-07-15 13:16:27.068034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.578 [2024-07-15 13:16:27.068054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:30.578 [2024-07-15 13:16:27.068070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:17:30.579 [2024-07-15 13:16:27.068083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.579 [2024-07-15 13:16:27.068281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.579 [2024-07-15 13:16:27.068305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:30.579 [2024-07-15 13:16:27.068322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:17:30.579 [2024-07-15 13:16:27.068334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.579 [2024-07-15 13:16:27.076121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.579 [2024-07-15 13:16:27.076188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:30.579 [2024-07-15 13:16:27.076210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.752 ms 00:17:30.579 [2024-07-15 13:16:27.076231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.579 [2024-07-15 13:16:27.086449] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:17:30.579 [2024-07-15 13:16:27.094270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.579 [2024-07-15 13:16:27.094322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:30.579 [2024-07-15 13:16:27.094343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.939 ms 00:17:30.579 [2024-07-15 13:16:27.094358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.579 [2024-07-15 13:16:27.157093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.579 [2024-07-15 13:16:27.157216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:30.579 [2024-07-15 13:16:27.157241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.683 ms 00:17:30.579 [2024-07-15 13:16:27.157278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.579 [2024-07-15 13:16:27.157546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.579 [2024-07-15 13:16:27.157574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:30.579 [2024-07-15 13:16:27.157589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:17:30.579 [2024-07-15 13:16:27.157603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.579 [2024-07-15 13:16:27.161587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.579 [2024-07-15 13:16:27.161653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:30.579 [2024-07-15 13:16:27.161672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.957 ms 00:17:30.579 [2024-07-15 13:16:27.161701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.579 [2024-07-15 13:16:27.164961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.579 [2024-07-15 13:16:27.165010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Save initial chunk info metadata 00:17:30.579 [2024-07-15 13:16:27.165029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.199 ms 00:17:30.579 [2024-07-15 13:16:27.165044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.579 [2024-07-15 13:16:27.165508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.579 [2024-07-15 13:16:27.165539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:30.579 [2024-07-15 13:16:27.165554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:17:30.579 [2024-07-15 13:16:27.165571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.579 [2024-07-15 13:16:27.202936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.579 [2024-07-15 13:16:27.203032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:30.579 [2024-07-15 13:16:27.203055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.313 ms 00:17:30.579 [2024-07-15 13:16:27.203074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.579 [2024-07-15 13:16:27.208284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.579 [2024-07-15 13:16:27.208343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:30.579 [2024-07-15 13:16:27.208364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.110 ms 00:17:30.579 [2024-07-15 13:16:27.208395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.579 [2024-07-15 13:16:27.212085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.579 [2024-07-15 13:16:27.212166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:30.579 [2024-07-15 13:16:27.212186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.644 ms 00:17:30.579 [2024-07-15 13:16:27.212200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.579 [2024-07-15 13:16:27.216336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.579 [2024-07-15 13:16:27.216387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:30.579 [2024-07-15 13:16:27.216406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.090 ms 00:17:30.579 [2024-07-15 13:16:27.216424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.579 [2024-07-15 13:16:27.216475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.579 [2024-07-15 13:16:27.216510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:30.579 [2024-07-15 13:16:27.216524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:30.579 [2024-07-15 13:16:27.216539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.579 [2024-07-15 13:16:27.216645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.579 [2024-07-15 13:16:27.216668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:30.579 [2024-07-15 13:16:27.216692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:30.579 [2024-07-15 13:16:27.216716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.579 [2024-07-15 13:16:27.218160] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 
'FTL startup', duration = 2734.569 ms, result 0 00:17:30.579 { 00:17:30.579 "name": "ftl0", 00:17:30.579 "uuid": "41f02daa-c6a6-4ce1-a78f-a36e89420cbf" 00:17:30.579 } 00:17:30.579 13:16:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:17:30.579 13:16:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:17:30.579 13:16:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:17:30.837 13:16:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:17:31.095 [2024-07-15 13:16:27.668362] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:31.095 I/O size of 69632 is greater than zero copy threshold (65536). 00:17:31.095 Zero copy mechanism will not be used. 00:17:31.095 Running I/O for 4 seconds... 00:17:35.276 00:17:35.276 Latency(us) 00:17:35.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.276 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:17:35.276 ftl0 : 4.00 1755.77 116.59 0.00 0.00 596.51 245.76 2219.29 00:17:35.276 =================================================================================================================== 00:17:35.276 Total : 1755.77 116.59 0.00 0.00 596.51 245.76 2219.29 00:17:35.276 [2024-07-15 13:16:31.676334] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:35.276 0 00:17:35.276 13:16:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:17:35.276 [2024-07-15 13:16:31.809954] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:35.276 Running I/O for 4 seconds... 00:17:39.457 00:17:39.457 Latency(us) 00:17:39.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.457 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:17:39.458 ftl0 : 4.02 7041.56 27.51 0.00 0.00 18107.50 351.88 37176.79 00:17:39.458 =================================================================================================================== 00:17:39.458 Total : 7041.56 27.51 0.00 0.00 18107.50 0.00 37176.79 00:17:39.458 [2024-07-15 13:16:35.845018] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:39.458 0 00:17:39.458 13:16:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:17:39.458 [2024-07-15 13:16:35.984211] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:39.458 Running I/O for 4 seconds... 
00:17:43.639 00:17:43.639 Latency(us) 00:17:43.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.639 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:43.639 Verification LBA range: start 0x0 length 0x1400000 00:17:43.639 ftl0 : 4.01 5642.03 22.04 0.00 0.00 22600.60 379.81 33125.47 00:17:43.639 =================================================================================================================== 00:17:43.639 Total : 5642.03 22.04 0.00 0.00 22600.60 0.00 33125.47 00:17:43.639 0 00:17:43.639 [2024-07-15 13:16:40.008728] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:43.639 13:16:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:17:43.639 [2024-07-15 13:16:40.245282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.639 [2024-07-15 13:16:40.245368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:43.639 [2024-07-15 13:16:40.245395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:43.639 [2024-07-15 13:16:40.245416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.639 [2024-07-15 13:16:40.245461] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:43.639 [2024-07-15 13:16:40.246356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.639 [2024-07-15 13:16:40.246395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:43.639 [2024-07-15 13:16:40.246418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:17:43.639 [2024-07-15 13:16:40.246434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.639 [2024-07-15 13:16:40.249640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.639 [2024-07-15 13:16:40.249694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:43.639 [2024-07-15 13:16:40.249731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.164 ms 00:17:43.639 [2024-07-15 13:16:40.249747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.898 [2024-07-15 13:16:40.438536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.898 [2024-07-15 13:16:40.438618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:43.898 [2024-07-15 13:16:40.438650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 188.737 ms 00:17:43.898 [2024-07-15 13:16:40.438664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.898 [2024-07-15 13:16:40.445409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.898 [2024-07-15 13:16:40.445446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:43.898 [2024-07-15 13:16:40.445466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.694 ms 00:17:43.898 [2024-07-15 13:16:40.445478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.898 [2024-07-15 13:16:40.447442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.898 [2024-07-15 13:16:40.447483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:43.899 [2024-07-15 13:16:40.447502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 1.879 ms 00:17:43.899 [2024-07-15 13:16:40.447514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.899 [2024-07-15 13:16:40.452284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.899 [2024-07-15 13:16:40.452328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:43.899 [2024-07-15 13:16:40.452350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.724 ms 00:17:43.899 [2024-07-15 13:16:40.452363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.899 [2024-07-15 13:16:40.452505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.899 [2024-07-15 13:16:40.452525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:43.899 [2024-07-15 13:16:40.452542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:17:43.899 [2024-07-15 13:16:40.452554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.899 [2024-07-15 13:16:40.454286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.899 [2024-07-15 13:16:40.454323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:43.899 [2024-07-15 13:16:40.454342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.705 ms 00:17:43.899 [2024-07-15 13:16:40.454353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.899 [2024-07-15 13:16:40.455701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.899 [2024-07-15 13:16:40.455738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:43.899 [2024-07-15 13:16:40.455756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.288 ms 00:17:43.899 [2024-07-15 13:16:40.455768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.899 [2024-07-15 13:16:40.456962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.899 [2024-07-15 13:16:40.456999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:43.899 [2024-07-15 13:16:40.457017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.152 ms 00:17:43.899 [2024-07-15 13:16:40.457027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.899 [2024-07-15 13:16:40.458092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.899 [2024-07-15 13:16:40.458130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:43.899 [2024-07-15 13:16:40.458165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:17:43.899 [2024-07-15 13:16:40.458180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.899 [2024-07-15 13:16:40.458224] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:43.899 [2024-07-15 13:16:40.458251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 
[2024-07-15 13:16:40.458311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free 00:17:43.899 [2024-07-15 13:16:40.458651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 
261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.458994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.459011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.459024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.459038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.459051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.459068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.459080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.459095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.459107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.459121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.459133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.459449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.459533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.459689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.459753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.459897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.460030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.460232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.460295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.460425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.460482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:43.899 [2024-07-15 13:16:40.460542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:43.900 [2024-07-15 13:16:40.460633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:43.900 [2024-07-15 13:16:40.460650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:43.900 [2024-07-15 13:16:40.460663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:43.900 [2024-07-15 13:16:40.460677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:43.900 [2024-07-15 13:16:40.460690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:43.900 [2024-07-15 13:16:40.460706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:43.900 [2024-07-15 13:16:40.460718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:43.900 [2024-07-15 13:16:40.460733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:43.900 [2024-07-15 13:16:40.460745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:43.900 [2024-07-15 13:16:40.460759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:43.900 [2024-07-15 13:16:40.460771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:43.900 [2024-07-15 13:16:40.460785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:43.900 [2024-07-15 13:16:40.460797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:43.900 [2024-07-15 13:16:40.460812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:43.900 [2024-07-15 13:16:40.460824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:43.900 [2024-07-15 13:16:40.460841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:43.900 [2024-07-15 13:16:40.460853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:43.900 [2024-07-15 13:16:40.460868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:43.900 [2024-07-15 13:16:40.460880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:43.900 [2024-07-15 13:16:40.460894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:43.900 [2024-07-15 13:16:40.460906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:43.900 [2024-07-15 13:16:40.460921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:43.900 [2024-07-15 13:16:40.460933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:43.900 [2024-07-15 13:16:40.460948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:43.900 [2024-07-15 13:16:40.460969] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:43.900 [2024-07-15 13:16:40.460985] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 41f02daa-c6a6-4ce1-a78f-a36e89420cbf 00:17:43.900 [2024-07-15 13:16:40.461008] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:43.900 [2024-07-15 13:16:40.461023] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 
00:17:43.900 [2024-07-15 13:16:40.461034] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:43.900 [2024-07-15 13:16:40.461048] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:43.900 [2024-07-15 13:16:40.461069] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:43.900 [2024-07-15 13:16:40.461088] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:43.900 [2024-07-15 13:16:40.461100] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:43.900 [2024-07-15 13:16:40.461113] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:43.900 [2024-07-15 13:16:40.461123] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:43.900 [2024-07-15 13:16:40.461139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.900 [2024-07-15 13:16:40.461169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:43.900 [2024-07-15 13:16:40.461189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.918 ms 00:17:43.900 [2024-07-15 13:16:40.461201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.900 [2024-07-15 13:16:40.463336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.900 [2024-07-15 13:16:40.463360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:43.900 [2024-07-15 13:16:40.463385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.089 ms 00:17:43.900 [2024-07-15 13:16:40.463398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.900 [2024-07-15 13:16:40.463530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.900 [2024-07-15 13:16:40.463548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:43.900 [2024-07-15 13:16:40.463563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:17:43.900 [2024-07-15 13:16:40.463575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.900 [2024-07-15 13:16:40.471040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.900 [2024-07-15 13:16:40.471268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:43.900 [2024-07-15 13:16:40.471391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.900 [2024-07-15 13:16:40.471525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.900 [2024-07-15 13:16:40.471655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.900 [2024-07-15 13:16:40.471725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:43.900 [2024-07-15 13:16:40.471854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.900 [2024-07-15 13:16:40.471907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.900 [2024-07-15 13:16:40.472099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.900 [2024-07-15 13:16:40.472297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:43.900 [2024-07-15 13:16:40.472422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.900 [2024-07-15 13:16:40.472538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.900 [2024-07-15 13:16:40.472615] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.900 [2024-07-15 13:16:40.472727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:43.900 [2024-07-15 13:16:40.472857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.900 [2024-07-15 13:16:40.472916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.900 [2024-07-15 13:16:40.487789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.900 [2024-07-15 13:16:40.488073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:43.900 [2024-07-15 13:16:40.488241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.900 [2024-07-15 13:16:40.488295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.900 [2024-07-15 13:16:40.498364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.900 [2024-07-15 13:16:40.498620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:43.900 [2024-07-15 13:16:40.498753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.900 [2024-07-15 13:16:40.498804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.900 [2024-07-15 13:16:40.498958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.900 [2024-07-15 13:16:40.499021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:43.900 [2024-07-15 13:16:40.499157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.900 [2024-07-15 13:16:40.499214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.900 [2024-07-15 13:16:40.499415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.900 [2024-07-15 13:16:40.499482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:43.900 [2024-07-15 13:16:40.499533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.900 [2024-07-15 13:16:40.499654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.900 [2024-07-15 13:16:40.499821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.900 [2024-07-15 13:16:40.499885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:43.900 [2024-07-15 13:16:40.500002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.900 [2024-07-15 13:16:40.500055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.900 [2024-07-15 13:16:40.500243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.900 [2024-07-15 13:16:40.500311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:43.900 [2024-07-15 13:16:40.500361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.900 [2024-07-15 13:16:40.500465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.900 [2024-07-15 13:16:40.500578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.900 [2024-07-15 13:16:40.500642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:43.900 [2024-07-15 13:16:40.500691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.900 [2024-07-15 13:16:40.500817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:43.900 [2024-07-15 13:16:40.500913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.900 [2024-07-15 13:16:40.500965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:43.900 [2024-07-15 13:16:40.501076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.900 [2024-07-15 13:16:40.501243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.900 [2024-07-15 13:16:40.501474] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 256.136 ms, result 0 00:17:43.900 true 00:17:43.900 13:16:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 89158 00:17:43.900 13:16:40 ftl.ftl_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 89158 ']' 00:17:43.900 13:16:40 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # kill -0 89158 00:17:43.900 13:16:40 ftl.ftl_bdevperf -- common/autotest_common.sh@951 -- # uname 00:17:43.900 13:16:40 ftl.ftl_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:43.900 13:16:40 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89158 00:17:43.900 killing process with pid 89158 00:17:43.900 Received shutdown signal, test time was about 4.000000 seconds 00:17:43.900 00:17:43.900 Latency(us) 00:17:43.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.900 =================================================================================================================== 00:17:43.900 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:43.900 13:16:40 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:43.900 13:16:40 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:43.900 13:16:40 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89158' 00:17:43.900 13:16:40 ftl.ftl_bdevperf -- common/autotest_common.sh@965 -- # kill 89158 00:17:43.900 13:16:40 ftl.ftl_bdevperf -- common/autotest_common.sh@970 -- # wait 89158 00:17:47.183 13:16:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:17:47.183 13:16:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:17:47.183 13:16:43 ftl.ftl_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:47.183 13:16:43 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:47.183 13:16:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm 00:17:47.183 Remove shared memory files 00:17:47.183 13:16:43 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:47.183 13:16:43 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:17:47.183 13:16:43 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:17:47.184 13:16:43 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:17:47.184 13:16:43 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:47.184 13:16:43 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:17:47.184 ************************************ 00:17:47.184 END TEST ftl_bdevperf 00:17:47.184 ************************************ 00:17:47.184 00:17:47.184 real 0m24.014s 00:17:47.184 user 0m27.546s 00:17:47.184 sys 0m1.218s 00:17:47.184 13:16:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:47.184 13:16:43 ftl.ftl_bdevperf -- common/autotest_common.sh@10 
-- # set +x 00:17:47.184 13:16:43 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:47.184 13:16:43 ftl -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:17:47.184 13:16:43 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:47.184 13:16:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:47.184 ************************************ 00:17:47.184 START TEST ftl_trim 00:17:47.184 ************************************ 00:17:47.184 13:16:43 ftl.ftl_trim -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:47.184 * Looking for test storage... 00:17:47.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:47.184 13:16:43 ftl.ftl_trim 
-- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=89501 00:17:47.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 89501 00:17:47.184 13:16:43 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:47.184 13:16:43 ftl.ftl_trim -- common/autotest_common.sh@827 -- # '[' -z 89501 ']' 00:17:47.184 13:16:43 ftl.ftl_trim -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.184 13:16:43 ftl.ftl_trim -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:47.184 13:16:43 ftl.ftl_trim -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.184 13:16:43 ftl.ftl_trim -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:47.184 13:16:43 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:47.442 [2024-07-15 13:16:43.933575] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:17:47.442 [2024-07-15 13:16:43.933773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89501 ] 00:17:47.442 [2024-07-15 13:16:44.083970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:47.700 [2024-07-15 13:16:44.189368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.700 [2024-07-15 13:16:44.189440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.700 [2024-07-15 13:16:44.189400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.266 13:16:44 ftl.ftl_trim -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:48.266 13:16:44 ftl.ftl_trim -- common/autotest_common.sh@860 -- # return 0 00:17:48.266 13:16:44 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:48.266 13:16:44 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:17:48.266 13:16:44 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:48.266 13:16:44 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:17:48.266 13:16:44 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:17:48.266 13:16:44 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:48.524 13:16:45 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:48.524 13:16:45 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:17:48.524 13:16:45 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:48.524 13:16:45 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:17:48.524 13:16:45 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:48.524 13:16:45 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:17:48.524 13:16:45 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local nb 00:17:48.524 13:16:45 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:48.784 13:16:45 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:48.784 { 00:17:48.784 "name": "nvme0n1", 00:17:48.784 "aliases": [ 00:17:48.784 "6c2daf23-f303-4d6e-95b5-9265c7d9be0c" 00:17:48.784 ], 00:17:48.784 "product_name": "NVMe disk", 00:17:48.784 "block_size": 4096, 00:17:48.784 "num_blocks": 1310720, 00:17:48.784 "uuid": "6c2daf23-f303-4d6e-95b5-9265c7d9be0c", 00:17:48.784 "assigned_rate_limits": { 00:17:48.784 "rw_ios_per_sec": 0, 00:17:48.784 "rw_mbytes_per_sec": 0, 00:17:48.784 "r_mbytes_per_sec": 0, 00:17:48.784 "w_mbytes_per_sec": 0 00:17:48.784 }, 00:17:48.784 "claimed": true, 00:17:48.784 "claim_type": "read_many_write_one", 00:17:48.784 "zoned": false, 00:17:48.784 "supported_io_types": { 00:17:48.784 "read": true, 00:17:48.784 "write": true, 00:17:48.784 "unmap": true, 00:17:48.784 "write_zeroes": true, 00:17:48.784 "flush": true, 00:17:48.784 "reset": true, 00:17:48.784 "compare": true, 00:17:48.784 "compare_and_write": false, 00:17:48.784 "abort": true, 00:17:48.784 "nvme_admin": true, 00:17:48.784 "nvme_io": true 00:17:48.784 }, 00:17:48.784 "driver_specific": { 00:17:48.784 "nvme": [ 00:17:48.784 { 00:17:48.784 "pci_address": "0000:00:11.0", 00:17:48.784 "trid": { 00:17:48.784 "trtype": "PCIe", 00:17:48.784 "traddr": "0000:00:11.0" 00:17:48.784 }, 00:17:48.784 "ctrlr_data": { 
00:17:48.784 "cntlid": 0, 00:17:48.784 "vendor_id": "0x1b36", 00:17:48.784 "model_number": "QEMU NVMe Ctrl", 00:17:48.784 "serial_number": "12341", 00:17:48.784 "firmware_revision": "8.0.0", 00:17:48.784 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:48.784 "oacs": { 00:17:48.784 "security": 0, 00:17:48.784 "format": 1, 00:17:48.784 "firmware": 0, 00:17:48.784 "ns_manage": 1 00:17:48.785 }, 00:17:48.785 "multi_ctrlr": false, 00:17:48.785 "ana_reporting": false 00:17:48.785 }, 00:17:48.785 "vs": { 00:17:48.785 "nvme_version": "1.4" 00:17:48.785 }, 00:17:48.785 "ns_data": { 00:17:48.785 "id": 1, 00:17:48.785 "can_share": false 00:17:48.785 } 00:17:48.785 } 00:17:48.785 ], 00:17:48.785 "mp_policy": "active_passive" 00:17:48.785 } 00:17:48.785 } 00:17:48.785 ]' 00:17:48.785 13:16:45 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:49.048 13:16:45 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:17:49.048 13:16:45 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:49.048 13:16:45 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=1310720 00:17:49.048 13:16:45 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:17:49.048 13:16:45 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 5120 00:17:49.048 13:16:45 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:17:49.048 13:16:45 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:49.048 13:16:45 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:17:49.048 13:16:45 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:49.048 13:16:45 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:49.305 13:16:45 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=603b4d03-c66a-4151-85a8-f8ddbc340fb6 00:17:49.305 13:16:45 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:17:49.305 13:16:45 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 603b4d03-c66a-4151-85a8-f8ddbc340fb6 00:17:49.563 13:16:46 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:49.820 13:16:46 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=39e9969d-ed9c-4873-93d3-86e1b06c57d1 00:17:49.820 13:16:46 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 39e9969d-ed9c-4873-93d3-86e1b06c57d1 00:17:50.078 13:16:46 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=7649332f-1c6c-41f9-8286-f520592e59b6 00:17:50.078 13:16:46 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7649332f-1c6c-41f9-8286-f520592e59b6 00:17:50.078 13:16:46 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:17:50.078 13:16:46 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:50.078 13:16:46 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=7649332f-1c6c-41f9-8286-f520592e59b6 00:17:50.078 13:16:46 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:17:50.078 13:16:46 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 7649332f-1c6c-41f9-8286-f520592e59b6 00:17:50.078 13:16:46 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=7649332f-1c6c-41f9-8286-f520592e59b6 00:17:50.078 13:16:46 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:50.078 13:16:46 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:17:50.078 13:16:46 ftl.ftl_trim -- 
common/autotest_common.sh@1377 -- # local nb 00:17:50.078 13:16:46 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7649332f-1c6c-41f9-8286-f520592e59b6 00:17:50.336 13:16:46 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:50.336 { 00:17:50.336 "name": "7649332f-1c6c-41f9-8286-f520592e59b6", 00:17:50.336 "aliases": [ 00:17:50.336 "lvs/nvme0n1p0" 00:17:50.336 ], 00:17:50.336 "product_name": "Logical Volume", 00:17:50.336 "block_size": 4096, 00:17:50.336 "num_blocks": 26476544, 00:17:50.336 "uuid": "7649332f-1c6c-41f9-8286-f520592e59b6", 00:17:50.336 "assigned_rate_limits": { 00:17:50.336 "rw_ios_per_sec": 0, 00:17:50.336 "rw_mbytes_per_sec": 0, 00:17:50.336 "r_mbytes_per_sec": 0, 00:17:50.336 "w_mbytes_per_sec": 0 00:17:50.336 }, 00:17:50.336 "claimed": false, 00:17:50.336 "zoned": false, 00:17:50.336 "supported_io_types": { 00:17:50.336 "read": true, 00:17:50.336 "write": true, 00:17:50.336 "unmap": true, 00:17:50.336 "write_zeroes": true, 00:17:50.336 "flush": false, 00:17:50.336 "reset": true, 00:17:50.336 "compare": false, 00:17:50.336 "compare_and_write": false, 00:17:50.336 "abort": false, 00:17:50.336 "nvme_admin": false, 00:17:50.336 "nvme_io": false 00:17:50.336 }, 00:17:50.336 "driver_specific": { 00:17:50.336 "lvol": { 00:17:50.336 "lvol_store_uuid": "39e9969d-ed9c-4873-93d3-86e1b06c57d1", 00:17:50.336 "base_bdev": "nvme0n1", 00:17:50.336 "thin_provision": true, 00:17:50.336 "num_allocated_clusters": 0, 00:17:50.336 "snapshot": false, 00:17:50.336 "clone": false, 00:17:50.336 "esnap_clone": false 00:17:50.336 } 00:17:50.336 } 00:17:50.336 } 00:17:50.336 ]' 00:17:50.336 13:16:46 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:50.336 13:16:47 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:17:50.336 13:16:47 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:50.594 13:16:47 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:50.594 13:16:47 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:50.594 13:16:47 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 103424 00:17:50.594 13:16:47 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:17:50.594 13:16:47 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:17:50.594 13:16:47 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:50.852 13:16:47 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:50.852 13:16:47 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:50.852 13:16:47 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 7649332f-1c6c-41f9-8286-f520592e59b6 00:17:50.852 13:16:47 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=7649332f-1c6c-41f9-8286-f520592e59b6 00:17:50.852 13:16:47 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:50.852 13:16:47 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:17:50.852 13:16:47 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local nb 00:17:50.852 13:16:47 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7649332f-1c6c-41f9-8286-f520592e59b6 00:17:51.110 13:16:47 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:51.110 { 00:17:51.110 "name": "7649332f-1c6c-41f9-8286-f520592e59b6", 00:17:51.110 "aliases": [ 00:17:51.110 
"lvs/nvme0n1p0" 00:17:51.110 ], 00:17:51.110 "product_name": "Logical Volume", 00:17:51.110 "block_size": 4096, 00:17:51.110 "num_blocks": 26476544, 00:17:51.110 "uuid": "7649332f-1c6c-41f9-8286-f520592e59b6", 00:17:51.110 "assigned_rate_limits": { 00:17:51.110 "rw_ios_per_sec": 0, 00:17:51.110 "rw_mbytes_per_sec": 0, 00:17:51.110 "r_mbytes_per_sec": 0, 00:17:51.110 "w_mbytes_per_sec": 0 00:17:51.110 }, 00:17:51.110 "claimed": false, 00:17:51.110 "zoned": false, 00:17:51.110 "supported_io_types": { 00:17:51.110 "read": true, 00:17:51.110 "write": true, 00:17:51.110 "unmap": true, 00:17:51.110 "write_zeroes": true, 00:17:51.110 "flush": false, 00:17:51.110 "reset": true, 00:17:51.110 "compare": false, 00:17:51.110 "compare_and_write": false, 00:17:51.110 "abort": false, 00:17:51.110 "nvme_admin": false, 00:17:51.110 "nvme_io": false 00:17:51.110 }, 00:17:51.110 "driver_specific": { 00:17:51.110 "lvol": { 00:17:51.110 "lvol_store_uuid": "39e9969d-ed9c-4873-93d3-86e1b06c57d1", 00:17:51.110 "base_bdev": "nvme0n1", 00:17:51.110 "thin_provision": true, 00:17:51.110 "num_allocated_clusters": 0, 00:17:51.110 "snapshot": false, 00:17:51.110 "clone": false, 00:17:51.110 "esnap_clone": false 00:17:51.111 } 00:17:51.111 } 00:17:51.111 } 00:17:51.111 ]' 00:17:51.111 13:16:47 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:51.111 13:16:47 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:17:51.111 13:16:47 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:51.111 13:16:47 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:51.111 13:16:47 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:51.111 13:16:47 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 103424 00:17:51.111 13:16:47 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:17:51.111 13:16:47 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:51.368 13:16:48 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:17:51.368 13:16:48 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:17:51.368 13:16:48 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 7649332f-1c6c-41f9-8286-f520592e59b6 00:17:51.368 13:16:48 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=7649332f-1c6c-41f9-8286-f520592e59b6 00:17:51.368 13:16:48 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:51.368 13:16:48 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:17:51.368 13:16:48 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local nb 00:17:51.368 13:16:48 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7649332f-1c6c-41f9-8286-f520592e59b6 00:17:51.626 13:16:48 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:51.626 { 00:17:51.626 "name": "7649332f-1c6c-41f9-8286-f520592e59b6", 00:17:51.626 "aliases": [ 00:17:51.626 "lvs/nvme0n1p0" 00:17:51.626 ], 00:17:51.626 "product_name": "Logical Volume", 00:17:51.626 "block_size": 4096, 00:17:51.626 "num_blocks": 26476544, 00:17:51.626 "uuid": "7649332f-1c6c-41f9-8286-f520592e59b6", 00:17:51.626 "assigned_rate_limits": { 00:17:51.626 "rw_ios_per_sec": 0, 00:17:51.626 "rw_mbytes_per_sec": 0, 00:17:51.626 "r_mbytes_per_sec": 0, 00:17:51.626 "w_mbytes_per_sec": 0 00:17:51.626 }, 00:17:51.626 "claimed": false, 00:17:51.626 "zoned": false, 00:17:51.626 "supported_io_types": { 00:17:51.626 "read": 
true, 00:17:51.626 "write": true, 00:17:51.626 "unmap": true, 00:17:51.626 "write_zeroes": true, 00:17:51.626 "flush": false, 00:17:51.626 "reset": true, 00:17:51.626 "compare": false, 00:17:51.626 "compare_and_write": false, 00:17:51.626 "abort": false, 00:17:51.626 "nvme_admin": false, 00:17:51.626 "nvme_io": false 00:17:51.626 }, 00:17:51.626 "driver_specific": { 00:17:51.626 "lvol": { 00:17:51.627 "lvol_store_uuid": "39e9969d-ed9c-4873-93d3-86e1b06c57d1", 00:17:51.627 "base_bdev": "nvme0n1", 00:17:51.627 "thin_provision": true, 00:17:51.627 "num_allocated_clusters": 0, 00:17:51.627 "snapshot": false, 00:17:51.627 "clone": false, 00:17:51.627 "esnap_clone": false 00:17:51.627 } 00:17:51.627 } 00:17:51.627 } 00:17:51.627 ]' 00:17:51.627 13:16:48 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:51.627 13:16:48 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:17:51.627 13:16:48 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:51.884 13:16:48 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:51.884 13:16:48 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:51.884 13:16:48 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 103424 00:17:51.884 13:16:48 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:17:51.884 13:16:48 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7649332f-1c6c-41f9-8286-f520592e59b6 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:17:51.884 [2024-07-15 13:16:48.620678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.141 [2024-07-15 13:16:48.620986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:52.141 [2024-07-15 13:16:48.621026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:52.142 [2024-07-15 13:16:48.621041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.142 [2024-07-15 13:16:48.624294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.142 [2024-07-15 13:16:48.624453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:52.142 [2024-07-15 13:16:48.624591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.201 ms 00:17:52.142 [2024-07-15 13:16:48.624652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.142 [2024-07-15 13:16:48.625051] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:52.142 [2024-07-15 13:16:48.625565] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:52.142 [2024-07-15 13:16:48.625741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.142 [2024-07-15 13:16:48.625862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:52.142 [2024-07-15 13:16:48.626026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:17:52.142 [2024-07-15 13:16:48.626104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.142 [2024-07-15 13:16:48.626543] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1f2b6391-d3a9-46a0-a156-2b8935f16a29 00:17:52.142 [2024-07-15 13:16:48.628491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.142 [2024-07-15 13:16:48.628644] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:52.142 [2024-07-15 13:16:48.628764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:52.142 [2024-07-15 13:16:48.628818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.142 [2024-07-15 13:16:48.638580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.142 [2024-07-15 13:16:48.638873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:52.142 [2024-07-15 13:16:48.638994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.619 ms 00:17:52.142 [2024-07-15 13:16:48.639049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.142 [2024-07-15 13:16:48.639325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.142 [2024-07-15 13:16:48.639400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:52.142 [2024-07-15 13:16:48.639454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:52.142 [2024-07-15 13:16:48.639551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.142 [2024-07-15 13:16:48.639653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.142 [2024-07-15 13:16:48.639779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:52.142 [2024-07-15 13:16:48.639840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:52.142 [2024-07-15 13:16:48.639892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.142 [2024-07-15 13:16:48.640079] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:52.142 [2024-07-15 13:16:48.642432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.142 [2024-07-15 13:16:48.642461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:52.142 [2024-07-15 13:16:48.642496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.360 ms 00:17:52.142 [2024-07-15 13:16:48.642512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.142 [2024-07-15 13:16:48.642577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.142 [2024-07-15 13:16:48.642594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:52.142 [2024-07-15 13:16:48.642611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:52.142 [2024-07-15 13:16:48.642638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.142 [2024-07-15 13:16:48.642685] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:52.142 [2024-07-15 13:16:48.642866] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:52.142 [2024-07-15 13:16:48.642899] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:52.142 [2024-07-15 13:16:48.642918] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:52.142 [2024-07-15 13:16:48.642938] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:52.142 [2024-07-15 13:16:48.642966] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV 
cache device capacity: 5171.00 MiB 00:17:52.142 [2024-07-15 13:16:48.642983] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:52.142 [2024-07-15 13:16:48.642998] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:52.142 [2024-07-15 13:16:48.643026] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:52.142 [2024-07-15 13:16:48.643038] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:52.142 [2024-07-15 13:16:48.643054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.142 [2024-07-15 13:16:48.643069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:52.142 [2024-07-15 13:16:48.643085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:17:52.142 [2024-07-15 13:16:48.643097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.142 [2024-07-15 13:16:48.643365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.142 [2024-07-15 13:16:48.643434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:52.142 [2024-07-15 13:16:48.643568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:17:52.142 [2024-07-15 13:16:48.643650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.142 [2024-07-15 13:16:48.643926] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:52.142 [2024-07-15 13:16:48.644059] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:52.142 [2024-07-15 13:16:48.644131] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:52.142 [2024-07-15 13:16:48.644259] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.142 [2024-07-15 13:16:48.644325] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:52.142 [2024-07-15 13:16:48.644375] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:52.142 [2024-07-15 13:16:48.644486] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:52.142 [2024-07-15 13:16:48.644590] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:52.142 [2024-07-15 13:16:48.644648] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:52.142 [2024-07-15 13:16:48.644688] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:52.142 [2024-07-15 13:16:48.644796] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:52.142 [2024-07-15 13:16:48.644846] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:52.142 [2024-07-15 13:16:48.644889] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:52.142 [2024-07-15 13:16:48.644987] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:52.142 [2024-07-15 13:16:48.645045] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:52.142 [2024-07-15 13:16:48.645085] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.142 [2024-07-15 13:16:48.645133] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:52.142 [2024-07-15 13:16:48.645234] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:52.142 [2024-07-15 13:16:48.645279] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
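The layout figures reported above are internally consistent and can be checked directly: 23592960 L2P entries at 4 bytes each is exactly the 90.00 MiB "Region l2p" size, and the 60 MiB passed as --l2p_dram_limit is why the resident L2P cache is later reported as "59 (of 60) MiB". A quick arithmetic check in bash:

    echo $(( 23592960 * 4 ))               # 94371840 bytes of L2P table
    echo $(( 23592960 * 4 / 1048576 ))     # 90 MiB, matching the l2p region above
    echo $(( 23592960 * 4096 / 1048576 ))  # 92160 MiB addressable via ftl0 (num_blocks 23592960 below)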
00:17:52.142 [2024-07-15 13:16:48.645376] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:52.142 [2024-07-15 13:16:48.645429] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:52.142 [2024-07-15 13:16:48.645618] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:52.142 [2024-07-15 13:16:48.645674] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:52.142 [2024-07-15 13:16:48.645714] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:52.142 [2024-07-15 13:16:48.645760] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:52.142 [2024-07-15 13:16:48.645837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:52.142 [2024-07-15 13:16:48.645887] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:52.142 [2024-07-15 13:16:48.645925] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:52.142 [2024-07-15 13:16:48.645965] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:52.142 [2024-07-15 13:16:48.646002] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:52.142 [2024-07-15 13:16:48.646045] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:52.142 [2024-07-15 13:16:48.646193] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:52.142 [2024-07-15 13:16:48.646250] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:52.142 [2024-07-15 13:16:48.646291] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:52.142 [2024-07-15 13:16:48.646331] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:52.142 [2024-07-15 13:16:48.646416] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:52.142 [2024-07-15 13:16:48.646459] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:52.142 [2024-07-15 13:16:48.646556] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:52.142 [2024-07-15 13:16:48.646610] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:52.142 [2024-07-15 13:16:48.646737] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.142 [2024-07-15 13:16:48.646792] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:52.142 [2024-07-15 13:16:48.646887] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:52.142 [2024-07-15 13:16:48.646994] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.142 [2024-07-15 13:16:48.647042] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:52.142 [2024-07-15 13:16:48.647156] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:52.142 [2024-07-15 13:16:48.647266] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:52.142 [2024-07-15 13:16:48.647380] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.142 [2024-07-15 13:16:48.647521] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:52.142 [2024-07-15 13:16:48.647583] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:52.142 [2024-07-15 13:16:48.647689] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:52.142 [2024-07-15 13:16:48.647805] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region data_btm 00:17:52.142 [2024-07-15 13:16:48.647863] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:52.142 [2024-07-15 13:16:48.647982] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:52.142 [2024-07-15 13:16:48.648047] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:52.142 [2024-07-15 13:16:48.648207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:52.142 [2024-07-15 13:16:48.648283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:52.142 [2024-07-15 13:16:48.648443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:52.143 [2024-07-15 13:16:48.648462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:52.143 [2024-07-15 13:16:48.648477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:52.143 [2024-07-15 13:16:48.648489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:52.143 [2024-07-15 13:16:48.648503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:52.143 [2024-07-15 13:16:48.648515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:52.143 [2024-07-15 13:16:48.648532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:52.143 [2024-07-15 13:16:48.648544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:52.143 [2024-07-15 13:16:48.648558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:52.143 [2024-07-15 13:16:48.648570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:52.143 [2024-07-15 13:16:48.648584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:52.143 [2024-07-15 13:16:48.648595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:52.143 [2024-07-15 13:16:48.648610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:52.143 [2024-07-15 13:16:48.648622] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:52.143 [2024-07-15 13:16:48.648638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:52.143 [2024-07-15 13:16:48.648656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:52.143 [2024-07-15 
13:16:48.648671] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:52.143 [2024-07-15 13:16:48.648683] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:52.143 [2024-07-15 13:16:48.648698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:52.143 [2024-07-15 13:16:48.648712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.143 [2024-07-15 13:16:48.648727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:52.143 [2024-07-15 13:16:48.648741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.858 ms 00:17:52.143 [2024-07-15 13:16:48.648758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.143 [2024-07-15 13:16:48.648886] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:17:52.143 [2024-07-15 13:16:48.648913] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:54.669 [2024-07-15 13:16:51.091672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.669 [2024-07-15 13:16:51.091991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:54.669 [2024-07-15 13:16:51.092138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2442.790 ms 00:17:54.669 [2024-07-15 13:16:51.092323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.669 [2024-07-15 13:16:51.106985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.669 [2024-07-15 13:16:51.107294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:54.669 [2024-07-15 13:16:51.107431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.448 ms 00:17:54.669 [2024-07-15 13:16:51.107580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.669 [2024-07-15 13:16:51.107836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.669 [2024-07-15 13:16:51.107985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:54.669 [2024-07-15 13:16:51.108128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:54.669 [2024-07-15 13:16:51.108207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.669 [2024-07-15 13:16:51.129561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.669 [2024-07-15 13:16:51.129907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:54.669 [2024-07-15 13:16:51.130095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.208 ms 00:17:54.669 [2024-07-15 13:16:51.130137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.669 [2024-07-15 13:16:51.130328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.669 [2024-07-15 13:16:51.130398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:54.669 [2024-07-15 13:16:51.130421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:54.669 [2024-07-15 13:16:51.130446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.669 [2024-07-15 
13:16:51.131090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.669 [2024-07-15 13:16:51.131128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:54.669 [2024-07-15 13:16:51.131168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:17:54.669 [2024-07-15 13:16:51.131192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.669 [2024-07-15 13:16:51.131432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.669 [2024-07-15 13:16:51.131471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:54.669 [2024-07-15 13:16:51.131491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:17:54.669 [2024-07-15 13:16:51.131511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.669 [2024-07-15 13:16:51.142451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.669 [2024-07-15 13:16:51.142762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:54.669 [2024-07-15 13:16:51.142888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.885 ms 00:17:54.669 [2024-07-15 13:16:51.142945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.669 [2024-07-15 13:16:51.153427] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:54.669 [2024-07-15 13:16:51.175182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.669 [2024-07-15 13:16:51.175524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:54.669 [2024-07-15 13:16:51.175677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.918 ms 00:17:54.669 [2024-07-15 13:16:51.175789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.669 [2024-07-15 13:16:51.246364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.669 [2024-07-15 13:16:51.246661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:54.669 [2024-07-15 13:16:51.246794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.373 ms 00:17:54.669 [2024-07-15 13:16:51.246849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.669 [2024-07-15 13:16:51.247266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.669 [2024-07-15 13:16:51.247412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:54.669 [2024-07-15 13:16:51.247539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:17:54.669 [2024-07-15 13:16:51.247613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.669 [2024-07-15 13:16:51.251387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.669 [2024-07-15 13:16:51.251555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:54.669 [2024-07-15 13:16:51.251685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.677 ms 00:17:54.669 [2024-07-15 13:16:51.251728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.669 [2024-07-15 13:16:51.254823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.669 [2024-07-15 13:16:51.254866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:54.669 [2024-07-15 
13:16:51.254888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.021 ms 00:17:54.669 [2024-07-15 13:16:51.254901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.669 [2024-07-15 13:16:51.255393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.669 [2024-07-15 13:16:51.255434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:54.669 [2024-07-15 13:16:51.255454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:17:54.669 [2024-07-15 13:16:51.255466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.669 [2024-07-15 13:16:51.292903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.669 [2024-07-15 13:16:51.292978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:54.669 [2024-07-15 13:16:51.293005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.375 ms 00:17:54.669 [2024-07-15 13:16:51.293038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.669 [2024-07-15 13:16:51.298287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.669 [2024-07-15 13:16:51.298351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:54.669 [2024-07-15 13:16:51.298374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.171 ms 00:17:54.669 [2024-07-15 13:16:51.298388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.669 [2024-07-15 13:16:51.302050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.669 [2024-07-15 13:16:51.302109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:54.669 [2024-07-15 13:16:51.302131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.592 ms 00:17:54.669 [2024-07-15 13:16:51.302163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.669 [2024-07-15 13:16:51.306103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.669 [2024-07-15 13:16:51.306163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:54.669 [2024-07-15 13:16:51.306187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.869 ms 00:17:54.669 [2024-07-15 13:16:51.306201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.669 [2024-07-15 13:16:51.306276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.669 [2024-07-15 13:16:51.306299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:54.669 [2024-07-15 13:16:51.306337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:54.669 [2024-07-15 13:16:51.306364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.669 [2024-07-15 13:16:51.306468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.669 [2024-07-15 13:16:51.306487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:54.669 [2024-07-15 13:16:51.306503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:17:54.669 [2024-07-15 13:16:51.306533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.669 [2024-07-15 13:16:51.307802] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:54.669 [2024-07-15 13:16:51.309140] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2686.795 ms, result 0 00:17:54.669 [2024-07-15 13:16:51.310007] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:54.669 { 00:17:54.669 "name": "ftl0", 00:17:54.669 "uuid": "1f2b6391-d3a9-46a0-a156-2b8935f16a29" 00:17:54.669 } 00:17:54.669 13:16:51 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:54.669 13:16:51 ftl.ftl_trim -- common/autotest_common.sh@895 -- # local bdev_name=ftl0 00:17:54.669 13:16:51 ftl.ftl_trim -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:54.669 13:16:51 ftl.ftl_trim -- common/autotest_common.sh@897 -- # local i 00:17:54.669 13:16:51 ftl.ftl_trim -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:54.670 13:16:51 ftl.ftl_trim -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:54.670 13:16:51 ftl.ftl_trim -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:54.927 13:16:51 ftl.ftl_trim -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:55.184 [ 00:17:55.184 { 00:17:55.184 "name": "ftl0", 00:17:55.184 "aliases": [ 00:17:55.184 "1f2b6391-d3a9-46a0-a156-2b8935f16a29" 00:17:55.184 ], 00:17:55.184 "product_name": "FTL disk", 00:17:55.184 "block_size": 4096, 00:17:55.184 "num_blocks": 23592960, 00:17:55.184 "uuid": "1f2b6391-d3a9-46a0-a156-2b8935f16a29", 00:17:55.184 "assigned_rate_limits": { 00:17:55.184 "rw_ios_per_sec": 0, 00:17:55.184 "rw_mbytes_per_sec": 0, 00:17:55.184 "r_mbytes_per_sec": 0, 00:17:55.184 "w_mbytes_per_sec": 0 00:17:55.184 }, 00:17:55.184 "claimed": false, 00:17:55.184 "zoned": false, 00:17:55.184 "supported_io_types": { 00:17:55.184 "read": true, 00:17:55.184 "write": true, 00:17:55.184 "unmap": true, 00:17:55.184 "write_zeroes": true, 00:17:55.184 "flush": true, 00:17:55.184 "reset": false, 00:17:55.184 "compare": false, 00:17:55.184 "compare_and_write": false, 00:17:55.184 "abort": false, 00:17:55.184 "nvme_admin": false, 00:17:55.184 "nvme_io": false 00:17:55.185 }, 00:17:55.185 "driver_specific": { 00:17:55.185 "ftl": { 00:17:55.185 "base_bdev": "7649332f-1c6c-41f9-8286-f520592e59b6", 00:17:55.185 "cache": "nvc0n1p0" 00:17:55.185 } 00:17:55.185 } 00:17:55.185 } 00:17:55.185 ] 00:17:55.185 13:16:51 ftl.ftl_trim -- common/autotest_common.sh@903 -- # return 0 00:17:55.185 13:16:51 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:55.185 13:16:51 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:55.442 13:16:52 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:17:55.442 13:16:52 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:55.700 13:16:52 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:55.700 { 00:17:55.700 "name": "ftl0", 00:17:55.700 "aliases": [ 00:17:55.700 "1f2b6391-d3a9-46a0-a156-2b8935f16a29" 00:17:55.700 ], 00:17:55.700 "product_name": "FTL disk", 00:17:55.700 "block_size": 4096, 00:17:55.700 "num_blocks": 23592960, 00:17:55.700 "uuid": "1f2b6391-d3a9-46a0-a156-2b8935f16a29", 00:17:55.700 "assigned_rate_limits": { 00:17:55.700 "rw_ios_per_sec": 0, 00:17:55.700 "rw_mbytes_per_sec": 0, 00:17:55.700 "r_mbytes_per_sec": 0, 00:17:55.700 "w_mbytes_per_sec": 0 00:17:55.700 }, 00:17:55.700 "claimed": false, 00:17:55.700 "zoned": false, 00:17:55.700 "supported_io_types": { 
00:17:55.700 "read": true, 00:17:55.700 "write": true, 00:17:55.700 "unmap": true, 00:17:55.700 "write_zeroes": true, 00:17:55.700 "flush": true, 00:17:55.700 "reset": false, 00:17:55.700 "compare": false, 00:17:55.700 "compare_and_write": false, 00:17:55.700 "abort": false, 00:17:55.700 "nvme_admin": false, 00:17:55.700 "nvme_io": false 00:17:55.700 }, 00:17:55.700 "driver_specific": { 00:17:55.700 "ftl": { 00:17:55.700 "base_bdev": "7649332f-1c6c-41f9-8286-f520592e59b6", 00:17:55.700 "cache": "nvc0n1p0" 00:17:55.700 } 00:17:55.700 } 00:17:55.700 } 00:17:55.700 ]' 00:17:55.700 13:16:52 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:55.700 13:16:52 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:17:55.700 13:16:52 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:55.958 [2024-07-15 13:16:52.677519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.958 [2024-07-15 13:16:52.677597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:55.958 [2024-07-15 13:16:52.677621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:55.958 [2024-07-15 13:16:52.677656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.958 [2024-07-15 13:16:52.677710] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:55.958 [2024-07-15 13:16:52.678594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.958 [2024-07-15 13:16:52.678621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:55.958 [2024-07-15 13:16:52.678640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.851 ms 00:17:55.958 [2024-07-15 13:16:52.678652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.958 [2024-07-15 13:16:52.679280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.958 [2024-07-15 13:16:52.679316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:55.958 [2024-07-15 13:16:52.679335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:17:55.958 [2024-07-15 13:16:52.679347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.958 [2024-07-15 13:16:52.682950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.958 [2024-07-15 13:16:52.683000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:55.958 [2024-07-15 13:16:52.683021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.558 ms 00:17:55.958 [2024-07-15 13:16:52.683034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.958 [2024-07-15 13:16:52.690523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.958 [2024-07-15 13:16:52.690750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:55.958 [2024-07-15 13:16:52.690807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.407 ms 00:17:55.958 [2024-07-15 13:16:52.690821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.958 [2024-07-15 13:16:52.692832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.958 [2024-07-15 13:16:52.692877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:55.958 [2024-07-15 13:16:52.692909] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 1.848 ms 00:17:55.958 [2024-07-15 13:16:52.692924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.217 [2024-07-15 13:16:52.697473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.217 [2024-07-15 13:16:52.697533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:56.217 [2024-07-15 13:16:52.697556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.480 ms 00:17:56.217 [2024-07-15 13:16:52.697569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.217 [2024-07-15 13:16:52.697795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.217 [2024-07-15 13:16:52.697837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:56.217 [2024-07-15 13:16:52.697872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:17:56.217 [2024-07-15 13:16:52.697886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.217 [2024-07-15 13:16:52.699547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.217 [2024-07-15 13:16:52.699587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:56.217 [2024-07-15 13:16:52.699607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.611 ms 00:17:56.217 [2024-07-15 13:16:52.699619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.217 [2024-07-15 13:16:52.700992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.217 [2024-07-15 13:16:52.701031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:56.217 [2024-07-15 13:16:52.701050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.305 ms 00:17:56.217 [2024-07-15 13:16:52.701062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.217 [2024-07-15 13:16:52.702215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.217 [2024-07-15 13:16:52.702252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:56.217 [2024-07-15 13:16:52.702271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.068 ms 00:17:56.217 [2024-07-15 13:16:52.702283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.217 [2024-07-15 13:16:52.703475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.217 [2024-07-15 13:16:52.703512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:56.217 [2024-07-15 13:16:52.703531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.068 ms 00:17:56.217 [2024-07-15 13:16:52.703543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.217 [2024-07-15 13:16:52.703611] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:56.217 [2024-07-15 13:16:52.703637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.703657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.703670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.703688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:17:56.217 [2024-07-15 13:16:52.703701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.703716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.703729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.703744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.703757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.703772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.703785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.703800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.703813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.703829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.703841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.703856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.703869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.703885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.703897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.703915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.703927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.703947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.703959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.703974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.703986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.704002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.704016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.704032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.704044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.704068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.704081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.704096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.704108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.704123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.704135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.704167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.704182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.704197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.704209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.704224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.704238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.704253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.704267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.704282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.704295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.704311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.704323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.704339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.704351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:56.217 [2024-07-15 13:16:52.704367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704804] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.704992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.705005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.705020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.705034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.705075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.705088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.705104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.705116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.705135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:56.218 [2024-07-15 13:16:52.705170] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:56.218 [2024-07-15 13:16:52.705187] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1f2b6391-d3a9-46a0-a156-2b8935f16a29 00:17:56.218 [2024-07-15 13:16:52.705201] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:56.218 [2024-07-15 13:16:52.705235] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:17:56.218 [2024-07-15 13:16:52.705254] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:56.218 [2024-07-15 13:16:52.705274] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:56.218 [2024-07-15 13:16:52.705286] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:56.218 [2024-07-15 13:16:52.705301] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:56.218 [2024-07-15 13:16:52.705314] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:56.218 [2024-07-15 13:16:52.705327] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:56.218 [2024-07-15 13:16:52.705338] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:56.218 [2024-07-15 13:16:52.705353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.218 [2024-07-15 13:16:52.705366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:56.218 [2024-07-15 13:16:52.705382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.750 ms 00:17:56.218 [2024-07-15 13:16:52.705394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.218 [2024-07-15 13:16:52.708107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.218 [2024-07-15 13:16:52.708295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:56.218 [2024-07-15 13:16:52.708430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.660 ms 00:17:56.218 [2024-07-15 13:16:52.708456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.218 [2024-07-15 13:16:52.708660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.218 [2024-07-15 13:16:52.708690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:56.218 [2024-07-15 13:16:52.708710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:17:56.218 [2024-07-15 13:16:52.708723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.218 [2024-07-15 13:16:52.717243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.218 [2024-07-15 13:16:52.717322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:56.218 [2024-07-15 13:16:52.717346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.218 [2024-07-15 13:16:52.717360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.218 [2024-07-15 13:16:52.717534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.218 [2024-07-15 13:16:52.717554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:56.218 [2024-07-15 13:16:52.717571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.218 [2024-07-15 13:16:52.717604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.218 [2024-07-15 13:16:52.717713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.218 [2024-07-15 13:16:52.717733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:56.218 [2024-07-15 13:16:52.717754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.218 [2024-07-15 13:16:52.717766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.218 [2024-07-15 13:16:52.717811] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.218 [2024-07-15 13:16:52.717826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:56.218 [2024-07-15 13:16:52.717842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.218 [2024-07-15 13:16:52.717855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.218 [2024-07-15 13:16:52.735490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.218 [2024-07-15 13:16:52.735573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:56.218 [2024-07-15 13:16:52.735613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.218 [2024-07-15 13:16:52.735630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.218 [2024-07-15 13:16:52.746252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.218 [2024-07-15 13:16:52.746327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:56.218 [2024-07-15 13:16:52.746351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.218 [2024-07-15 13:16:52.746365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.218 [2024-07-15 13:16:52.746518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.218 [2024-07-15 13:16:52.746538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:56.219 [2024-07-15 13:16:52.746554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.219 [2024-07-15 13:16:52.746571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.219 [2024-07-15 13:16:52.746648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.219 [2024-07-15 13:16:52.746664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:56.219 [2024-07-15 13:16:52.746680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.219 [2024-07-15 13:16:52.746692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.219 [2024-07-15 13:16:52.746831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.219 [2024-07-15 13:16:52.746852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:56.219 [2024-07-15 13:16:52.746868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.219 [2024-07-15 13:16:52.746880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.219 [2024-07-15 13:16:52.746964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.219 [2024-07-15 13:16:52.746983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:56.219 [2024-07-15 13:16:52.747000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.219 [2024-07-15 13:16:52.747013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.219 [2024-07-15 13:16:52.747092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.219 [2024-07-15 13:16:52.747110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:56.219 [2024-07-15 13:16:52.747125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.219 [2024-07-15 13:16:52.747137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:56.219 [2024-07-15 13:16:52.747256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.219 [2024-07-15 13:16:52.747274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:56.219 [2024-07-15 13:16:52.747291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.219 [2024-07-15 13:16:52.747322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.219 [2024-07-15 13:16:52.747583] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.003 ms, result 0 00:17:56.219 true 00:17:56.219 13:16:52 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 89501 00:17:56.219 13:16:52 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 89501 ']' 00:17:56.219 13:16:52 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 89501 00:17:56.219 13:16:52 ftl.ftl_trim -- common/autotest_common.sh@951 -- # uname 00:17:56.219 13:16:52 ftl.ftl_trim -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:56.219 13:16:52 ftl.ftl_trim -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89501 00:17:56.219 13:16:52 ftl.ftl_trim -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:56.219 13:16:52 ftl.ftl_trim -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:56.219 13:16:52 ftl.ftl_trim -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89501' 00:17:56.219 killing process with pid 89501 00:17:56.219 13:16:52 ftl.ftl_trim -- common/autotest_common.sh@965 -- # kill 89501 00:17:56.219 13:16:52 ftl.ftl_trim -- common/autotest_common.sh@970 -- # wait 89501 00:17:59.501 13:16:56 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:18:00.872 65536+0 records in 00:18:00.872 65536+0 records out 00:18:00.872 268435456 bytes (268 MB, 256 MiB) copied, 1.16765 s, 230 MB/s 00:18:00.872 13:16:57 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:00.872 [2024-07-15 13:16:57.298125] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
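The dd step above prepares 256 MiB of random data (65536 blocks of 4 KiB), and the 230 MB/s it reports is simply 268435456 bytes over 1.16765 s; spdk_dd then writes that pattern onto ftl0 using the saved config. A sketch of the two commands, with the destination of dd's output assumed (the log elides it):

    PATTERN=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern   # file read back by spdk_dd below
    dd if=/dev/urandom of="$PATTERN" bs=4K count=65536             # 65536 * 4096 = 268435456 bytes
    # 268435456 / 1.16765 s is roughly 230 MB/s, the rate dd reports above
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if="$PATTERN" --ob=ftl0 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json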
00:18:00.872 [2024-07-15 13:16:57.298352] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89684 ] 00:18:00.872 [2024-07-15 13:16:57.444858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.872 [2024-07-15 13:16:57.545792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.130 [2024-07-15 13:16:57.673776] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:01.130 [2024-07-15 13:16:57.673877] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:01.130 [2024-07-15 13:16:57.828464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.130 [2024-07-15 13:16:57.828542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:01.130 [2024-07-15 13:16:57.828563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:01.130 [2024-07-15 13:16:57.828575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.130 [2024-07-15 13:16:57.831610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.130 [2024-07-15 13:16:57.831659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:01.130 [2024-07-15 13:16:57.831678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.004 ms 00:18:01.130 [2024-07-15 13:16:57.831689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.130 [2024-07-15 13:16:57.831837] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:01.130 [2024-07-15 13:16:57.832200] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:01.130 [2024-07-15 13:16:57.832227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.130 [2024-07-15 13:16:57.832250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:01.130 [2024-07-15 13:16:57.832267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:18:01.130 [2024-07-15 13:16:57.832279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.130 [2024-07-15 13:16:57.834367] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:01.130 [2024-07-15 13:16:57.837305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.130 [2024-07-15 13:16:57.837351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:01.130 [2024-07-15 13:16:57.837368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.939 ms 00:18:01.130 [2024-07-15 13:16:57.837381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.130 [2024-07-15 13:16:57.837488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.130 [2024-07-15 13:16:57.837518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:01.130 [2024-07-15 13:16:57.837531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:01.130 [2024-07-15 13:16:57.837548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.130 [2024-07-15 13:16:57.846117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.130 [2024-07-15 
13:16:57.846199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:01.130 [2024-07-15 13:16:57.846218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.492 ms 00:18:01.130 [2024-07-15 13:16:57.846241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.130 [2024-07-15 13:16:57.846473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.130 [2024-07-15 13:16:57.846496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:01.130 [2024-07-15 13:16:57.846510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:18:01.130 [2024-07-15 13:16:57.846526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.130 [2024-07-15 13:16:57.846578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.130 [2024-07-15 13:16:57.846594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:01.130 [2024-07-15 13:16:57.846607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:01.130 [2024-07-15 13:16:57.846619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.130 [2024-07-15 13:16:57.846654] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:01.130 [2024-07-15 13:16:57.848815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.130 [2024-07-15 13:16:57.848852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:01.130 [2024-07-15 13:16:57.848884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.172 ms 00:18:01.130 [2024-07-15 13:16:57.848895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.130 [2024-07-15 13:16:57.848952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.130 [2024-07-15 13:16:57.848969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:01.130 [2024-07-15 13:16:57.848981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:01.130 [2024-07-15 13:16:57.848991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.130 [2024-07-15 13:16:57.849029] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:01.130 [2024-07-15 13:16:57.849058] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:01.130 [2024-07-15 13:16:57.849112] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:01.130 [2024-07-15 13:16:57.849168] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:01.130 [2024-07-15 13:16:57.849285] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:01.130 [2024-07-15 13:16:57.849311] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:01.130 [2024-07-15 13:16:57.849326] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:01.130 [2024-07-15 13:16:57.849341] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:01.130 [2024-07-15 13:16:57.849355] ftl_layout.c: 677:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:01.130 [2024-07-15 13:16:57.849368] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:01.130 [2024-07-15 13:16:57.849378] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:01.130 [2024-07-15 13:16:57.849389] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:01.130 [2024-07-15 13:16:57.849405] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:01.130 [2024-07-15 13:16:57.849417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.130 [2024-07-15 13:16:57.849429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:01.130 [2024-07-15 13:16:57.849440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:18:01.130 [2024-07-15 13:16:57.849451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.130 [2024-07-15 13:16:57.849563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.130 [2024-07-15 13:16:57.849579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:01.130 [2024-07-15 13:16:57.849591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:01.130 [2024-07-15 13:16:57.849601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.130 [2024-07-15 13:16:57.849726] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:01.130 [2024-07-15 13:16:57.849750] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:01.130 [2024-07-15 13:16:57.849763] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:01.130 [2024-07-15 13:16:57.849775] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:01.130 [2024-07-15 13:16:57.849797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:01.130 [2024-07-15 13:16:57.849807] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:01.130 [2024-07-15 13:16:57.849818] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:01.130 [2024-07-15 13:16:57.849829] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:01.130 [2024-07-15 13:16:57.849840] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:01.130 [2024-07-15 13:16:57.849850] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:01.130 [2024-07-15 13:16:57.849861] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:01.130 [2024-07-15 13:16:57.849871] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:01.130 [2024-07-15 13:16:57.849885] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:01.130 [2024-07-15 13:16:57.849896] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:01.130 [2024-07-15 13:16:57.849907] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:01.130 [2024-07-15 13:16:57.849917] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:01.130 [2024-07-15 13:16:57.849928] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:01.130 [2024-07-15 13:16:57.849938] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:01.130 [2024-07-15 13:16:57.849949] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:18:01.130 [2024-07-15 13:16:57.849959] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:01.130 [2024-07-15 13:16:57.849970] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:01.130 [2024-07-15 13:16:57.849987] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:01.130 [2024-07-15 13:16:57.849998] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:01.130 [2024-07-15 13:16:57.850008] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:01.130 [2024-07-15 13:16:57.850019] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:01.130 [2024-07-15 13:16:57.850029] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:01.130 [2024-07-15 13:16:57.850039] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:01.130 [2024-07-15 13:16:57.850050] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:01.130 [2024-07-15 13:16:57.850066] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:01.130 [2024-07-15 13:16:57.850093] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:01.130 [2024-07-15 13:16:57.850106] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:01.130 [2024-07-15 13:16:57.850116] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:01.130 [2024-07-15 13:16:57.850127] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:01.130 [2024-07-15 13:16:57.850138] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:01.130 [2024-07-15 13:16:57.850163] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:01.130 [2024-07-15 13:16:57.850176] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:01.130 [2024-07-15 13:16:57.850186] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:01.130 [2024-07-15 13:16:57.850198] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:01.130 [2024-07-15 13:16:57.850209] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:01.130 [2024-07-15 13:16:57.850220] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:01.130 [2024-07-15 13:16:57.850231] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:01.130 [2024-07-15 13:16:57.850241] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:01.130 [2024-07-15 13:16:57.850252] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:01.130 [2024-07-15 13:16:57.850262] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:01.130 [2024-07-15 13:16:57.850277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:01.130 [2024-07-15 13:16:57.850289] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:01.130 [2024-07-15 13:16:57.850301] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:01.130 [2024-07-15 13:16:57.850312] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:01.130 [2024-07-15 13:16:57.850323] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:01.130 [2024-07-15 13:16:57.850334] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:01.130 [2024-07-15 13:16:57.850345] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:01.130 [2024-07-15 13:16:57.850355] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:01.130 [2024-07-15 13:16:57.850366] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:01.130 [2024-07-15 13:16:57.850382] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:01.130 [2024-07-15 13:16:57.850397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:01.130 [2024-07-15 13:16:57.850419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:01.131 [2024-07-15 13:16:57.850431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:01.131 [2024-07-15 13:16:57.850443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:01.131 [2024-07-15 13:16:57.850455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:01.131 [2024-07-15 13:16:57.850465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:01.131 [2024-07-15 13:16:57.850481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:01.131 [2024-07-15 13:16:57.850493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:01.131 [2024-07-15 13:16:57.850504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:01.131 [2024-07-15 13:16:57.850516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:01.131 [2024-07-15 13:16:57.850527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:01.131 [2024-07-15 13:16:57.850539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:01.131 [2024-07-15 13:16:57.850550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:01.131 [2024-07-15 13:16:57.850562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:01.131 [2024-07-15 13:16:57.850574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:01.131 [2024-07-15 13:16:57.850585] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:01.131 [2024-07-15 13:16:57.850610] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:01.131 [2024-07-15 13:16:57.850638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:18:01.131 [2024-07-15 13:16:57.850651] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:01.131 [2024-07-15 13:16:57.850663] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:01.131 [2024-07-15 13:16:57.850674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:01.131 [2024-07-15 13:16:57.850687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.131 [2024-07-15 13:16:57.850703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:01.131 [2024-07-15 13:16:57.850723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.026 ms 00:18:01.131 [2024-07-15 13:16:57.850735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.388 [2024-07-15 13:16:57.874523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.388 [2024-07-15 13:16:57.874874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:01.388 [2024-07-15 13:16:57.875046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.668 ms 00:18:01.388 [2024-07-15 13:16:57.875125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.388 [2024-07-15 13:16:57.875535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.388 [2024-07-15 13:16:57.875709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:01.388 [2024-07-15 13:16:57.875886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:18:01.388 [2024-07-15 13:16:57.875953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.388 [2024-07-15 13:16:57.889604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.388 [2024-07-15 13:16:57.889889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:01.388 [2024-07-15 13:16:57.890009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.484 ms 00:18:01.388 [2024-07-15 13:16:57.890041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.388 [2024-07-15 13:16:57.890224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.388 [2024-07-15 13:16:57.890246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:01.388 [2024-07-15 13:16:57.890260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:01.388 [2024-07-15 13:16:57.890290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.388 [2024-07-15 13:16:57.890876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.389 [2024-07-15 13:16:57.890911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:01.389 [2024-07-15 13:16:57.890933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:18:01.389 [2024-07-15 13:16:57.890953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.389 [2024-07-15 13:16:57.891129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.389 [2024-07-15 13:16:57.891166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:01.389 [2024-07-15 13:16:57.891181] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:18:01.389 [2024-07-15 13:16:57.891192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.389 [2024-07-15 13:16:57.899406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.389 [2024-07-15 13:16:57.899478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:01.389 [2024-07-15 13:16:57.899498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.179 ms 00:18:01.389 [2024-07-15 13:16:57.899511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.389 [2024-07-15 13:16:57.902579] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:01.389 [2024-07-15 13:16:57.902631] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:01.389 [2024-07-15 13:16:57.902656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.389 [2024-07-15 13:16:57.902670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:01.389 [2024-07-15 13:16:57.902685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.963 ms 00:18:01.389 [2024-07-15 13:16:57.902695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.389 [2024-07-15 13:16:57.918697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.389 [2024-07-15 13:16:57.918814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:01.389 [2024-07-15 13:16:57.918836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.935 ms 00:18:01.389 [2024-07-15 13:16:57.918855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.389 [2024-07-15 13:16:57.922243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.389 [2024-07-15 13:16:57.922291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:01.389 [2024-07-15 13:16:57.922309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.199 ms 00:18:01.389 [2024-07-15 13:16:57.922321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.389 [2024-07-15 13:16:57.923959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.389 [2024-07-15 13:16:57.924002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:01.389 [2024-07-15 13:16:57.924018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.579 ms 00:18:01.389 [2024-07-15 13:16:57.924029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.389 [2024-07-15 13:16:57.924571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.389 [2024-07-15 13:16:57.924617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:01.389 [2024-07-15 13:16:57.924634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:18:01.389 [2024-07-15 13:16:57.924645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.389 [2024-07-15 13:16:57.948042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.389 [2024-07-15 13:16:57.948126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:01.389 [2024-07-15 13:16:57.948169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.360 ms 
00:18:01.389 [2024-07-15 13:16:57.948185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.389 [2024-07-15 13:16:57.957244] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:01.389 [2024-07-15 13:16:57.979693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.389 [2024-07-15 13:16:57.979773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:01.389 [2024-07-15 13:16:57.979818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.325 ms 00:18:01.389 [2024-07-15 13:16:57.979831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.389 [2024-07-15 13:16:57.979982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.389 [2024-07-15 13:16:57.980011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:01.389 [2024-07-15 13:16:57.980025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:01.389 [2024-07-15 13:16:57.980043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.389 [2024-07-15 13:16:57.980121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.389 [2024-07-15 13:16:57.980138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:01.389 [2024-07-15 13:16:57.980182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:18:01.389 [2024-07-15 13:16:57.980196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.389 [2024-07-15 13:16:57.980233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.389 [2024-07-15 13:16:57.980248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:01.389 [2024-07-15 13:16:57.980272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:01.389 [2024-07-15 13:16:57.980297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.389 [2024-07-15 13:16:57.980359] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:01.389 [2024-07-15 13:16:57.980377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.389 [2024-07-15 13:16:57.980390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:01.389 [2024-07-15 13:16:57.980404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:01.389 [2024-07-15 13:16:57.980417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.389 [2024-07-15 13:16:57.984651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.389 [2024-07-15 13:16:57.984702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:01.389 [2024-07-15 13:16:57.984720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.203 ms 00:18:01.389 [2024-07-15 13:16:57.984733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.389 [2024-07-15 13:16:57.984847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.389 [2024-07-15 13:16:57.984867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:01.389 [2024-07-15 13:16:57.984881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:18:01.389 [2024-07-15 13:16:57.984893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.389 [2024-07-15 
13:16:57.986134] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:01.389 [2024-07-15 13:16:57.987458] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 157.329 ms, result 0 00:18:01.389 [2024-07-15 13:16:57.988348] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:01.389 [2024-07-15 13:16:57.996556] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:11.357  Copying: 25/256 [MB] (25 MBps) Copying: 50/256 [MB] (24 MBps) Copying: 76/256 [MB] (26 MBps) Copying: 102/256 [MB] (26 MBps) Copying: 129/256 [MB] (26 MBps) Copying: 155/256 [MB] (25 MBps) Copying: 181/256 [MB] (26 MBps) Copying: 208/256 [MB] (26 MBps) Copying: 234/256 [MB] (26 MBps) Copying: 256/256 [MB] (average 26 MBps)[2024-07-15 13:17:07.805241] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:11.357 [2024-07-15 13:17:07.806888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.357 [2024-07-15 13:17:07.806937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:11.357 [2024-07-15 13:17:07.806958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:11.357 [2024-07-15 13:17:07.806970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.357 [2024-07-15 13:17:07.807002] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:11.357 [2024-07-15 13:17:07.807827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.357 [2024-07-15 13:17:07.807860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:11.357 [2024-07-15 13:17:07.807875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.804 ms 00:18:11.358 [2024-07-15 13:17:07.807887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.358 [2024-07-15 13:17:07.809456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.358 [2024-07-15 13:17:07.809500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:11.358 [2024-07-15 13:17:07.809517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.539 ms 00:18:11.358 [2024-07-15 13:17:07.809536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.358 [2024-07-15 13:17:07.816324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.358 [2024-07-15 13:17:07.816369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:11.358 [2024-07-15 13:17:07.816386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.761 ms 00:18:11.358 [2024-07-15 13:17:07.816398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.358 [2024-07-15 13:17:07.823843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.358 [2024-07-15 13:17:07.823900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:11.358 [2024-07-15 13:17:07.823916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.399 ms 00:18:11.358 [2024-07-15 13:17:07.823946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.358 [2024-07-15 13:17:07.825529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:11.358 [2024-07-15 13:17:07.825568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:11.358 [2024-07-15 13:17:07.825584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.508 ms 00:18:11.358 [2024-07-15 13:17:07.825595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.358 [2024-07-15 13:17:07.828951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.358 [2024-07-15 13:17:07.828997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:11.358 [2024-07-15 13:17:07.829014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.314 ms 00:18:11.358 [2024-07-15 13:17:07.829025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.358 [2024-07-15 13:17:07.829191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.358 [2024-07-15 13:17:07.829213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:11.358 [2024-07-15 13:17:07.829226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:18:11.358 [2024-07-15 13:17:07.829243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.358 [2024-07-15 13:17:07.831049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.358 [2024-07-15 13:17:07.831087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:11.358 [2024-07-15 13:17:07.831101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.781 ms 00:18:11.358 [2024-07-15 13:17:07.831112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.358 [2024-07-15 13:17:07.832663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.358 [2024-07-15 13:17:07.832698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:11.358 [2024-07-15 13:17:07.832712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.496 ms 00:18:11.358 [2024-07-15 13:17:07.832723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.358 [2024-07-15 13:17:07.833921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.358 [2024-07-15 13:17:07.833959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:11.358 [2024-07-15 13:17:07.833974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.159 ms 00:18:11.358 [2024-07-15 13:17:07.833984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.358 [2024-07-15 13:17:07.835001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.358 [2024-07-15 13:17:07.835039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:11.358 [2024-07-15 13:17:07.835055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.916 ms 00:18:11.358 [2024-07-15 13:17:07.835065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.358 [2024-07-15 13:17:07.835103] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:11.358 [2024-07-15 13:17:07.835126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835173] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835475] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 
[2024-07-15 13:17:07.835800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:11.358 [2024-07-15 13:17:07.835911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.835923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.835936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.835948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.835960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.835972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.835984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.835996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.836008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.836020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.836033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.836045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.836057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.836069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.836080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.836092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:18:11.359 [2024-07-15 13:17:07.836104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.836116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.836128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.836140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.836517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.836579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.836776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.836869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.836996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.837113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.837327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.837391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.837457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.837610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.837629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.837641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.837654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.837665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.837678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.837690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.837702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.837714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.837726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:11.359 [2024-07-15 13:17:07.837747] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:11.359 [2024-07-15 13:17:07.837760] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1f2b6391-d3a9-46a0-a156-2b8935f16a29 
00:18:11.359 [2024-07-15 13:17:07.837786] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:11.359 [2024-07-15 13:17:07.837798] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:11.359 [2024-07-15 13:17:07.837809] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:11.359 [2024-07-15 13:17:07.837821] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:11.359 [2024-07-15 13:17:07.837832] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:11.359 [2024-07-15 13:17:07.837844] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:11.359 [2024-07-15 13:17:07.837860] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:11.359 [2024-07-15 13:17:07.837870] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:11.359 [2024-07-15 13:17:07.837880] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:11.359 [2024-07-15 13:17:07.837892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.359 [2024-07-15 13:17:07.837904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:11.359 [2024-07-15 13:17:07.837927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.791 ms 00:18:11.359 [2024-07-15 13:17:07.837938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.359 [2024-07-15 13:17:07.840094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.359 [2024-07-15 13:17:07.840125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:11.359 [2024-07-15 13:17:07.840140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.108 ms 00:18:11.359 [2024-07-15 13:17:07.840168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.359 [2024-07-15 13:17:07.840315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.359 [2024-07-15 13:17:07.840329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:11.359 [2024-07-15 13:17:07.840342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:18:11.359 [2024-07-15 13:17:07.840353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.359 [2024-07-15 13:17:07.847850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.359 [2024-07-15 13:17:07.848065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:11.359 [2024-07-15 13:17:07.848210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.359 [2024-07-15 13:17:07.848271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.359 [2024-07-15 13:17:07.848489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.359 [2024-07-15 13:17:07.848555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:11.359 [2024-07-15 13:17:07.848766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.359 [2024-07-15 13:17:07.848818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.359 [2024-07-15 13:17:07.848932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.359 [2024-07-15 13:17:07.848994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:11.359 [2024-07-15 13:17:07.849040] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.359 [2024-07-15 13:17:07.849155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.359 [2024-07-15 13:17:07.849334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.359 [2024-07-15 13:17:07.849396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:11.359 [2024-07-15 13:17:07.849585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.359 [2024-07-15 13:17:07.849608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.359 [2024-07-15 13:17:07.864556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.359 [2024-07-15 13:17:07.864629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:11.359 [2024-07-15 13:17:07.864649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.359 [2024-07-15 13:17:07.864661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.359 [2024-07-15 13:17:07.874758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.359 [2024-07-15 13:17:07.874827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:11.359 [2024-07-15 13:17:07.874847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.359 [2024-07-15 13:17:07.874859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.359 [2024-07-15 13:17:07.874942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.359 [2024-07-15 13:17:07.874960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:11.359 [2024-07-15 13:17:07.874972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.359 [2024-07-15 13:17:07.874983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.359 [2024-07-15 13:17:07.875022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.359 [2024-07-15 13:17:07.875046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:11.359 [2024-07-15 13:17:07.875058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.359 [2024-07-15 13:17:07.875069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.359 [2024-07-15 13:17:07.875185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.359 [2024-07-15 13:17:07.875206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:11.359 [2024-07-15 13:17:07.875219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.359 [2024-07-15 13:17:07.875231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.359 [2024-07-15 13:17:07.875297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.359 [2024-07-15 13:17:07.875316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:11.359 [2024-07-15 13:17:07.875334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.359 [2024-07-15 13:17:07.875346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.359 [2024-07-15 13:17:07.875405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.359 [2024-07-15 13:17:07.875423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:18:11.359 [2024-07-15 13:17:07.875436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.359 [2024-07-15 13:17:07.875447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.359 [2024-07-15 13:17:07.875513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.359 [2024-07-15 13:17:07.875535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:11.359 [2024-07-15 13:17:07.875548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.359 [2024-07-15 13:17:07.875567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.359 [2024-07-15 13:17:07.875776] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 68.837 ms, result 0 00:18:11.623 00:18:11.623 00:18:11.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.623 13:17:08 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=89800 00:18:11.623 13:17:08 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:11.623 13:17:08 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 89800 00:18:11.623 13:17:08 ftl.ftl_trim -- common/autotest_common.sh@827 -- # '[' -z 89800 ']' 00:18:11.623 13:17:08 ftl.ftl_trim -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.623 13:17:08 ftl.ftl_trim -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:11.623 13:17:08 ftl.ftl_trim -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.623 13:17:08 ftl.ftl_trim -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:11.623 13:17:08 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:11.880 [2024-07-15 13:17:08.469422] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:18:11.880 [2024-07-15 13:17:08.469867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89800 ] 00:18:11.880 [2024-07-15 13:17:08.616859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.137 [2024-07-15 13:17:08.715289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.701 13:17:09 ftl.ftl_trim -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:12.701 13:17:09 ftl.ftl_trim -- common/autotest_common.sh@860 -- # return 0 00:18:12.701 13:17:09 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:12.959 [2024-07-15 13:17:09.689297] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:12.959 [2024-07-15 13:17:09.689385] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:13.243 [2024-07-15 13:17:09.864319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.243 [2024-07-15 13:17:09.864395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:13.243 [2024-07-15 13:17:09.864421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:13.243 [2024-07-15 13:17:09.864444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.243 [2024-07-15 13:17:09.867727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.243 [2024-07-15 13:17:09.867771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:13.243 [2024-07-15 13:17:09.867794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.252 ms 00:18:13.243 [2024-07-15 13:17:09.867807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.243 [2024-07-15 13:17:09.867942] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:13.243 [2024-07-15 13:17:09.868276] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:13.243 [2024-07-15 13:17:09.868306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.243 [2024-07-15 13:17:09.868320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:13.243 [2024-07-15 13:17:09.868335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:18:13.243 [2024-07-15 13:17:09.868347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.243 [2024-07-15 13:17:09.870377] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:13.243 [2024-07-15 13:17:09.873324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.243 [2024-07-15 13:17:09.873392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:13.243 [2024-07-15 13:17:09.873412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.954 ms 00:18:13.244 [2024-07-15 13:17:09.873430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.244 [2024-07-15 13:17:09.873546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.244 [2024-07-15 13:17:09.873575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:13.244 [2024-07-15 13:17:09.873605] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:13.244 [2024-07-15 13:17:09.873628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.244 [2024-07-15 13:17:09.882318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.244 [2024-07-15 13:17:09.882425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:13.244 [2024-07-15 13:17:09.882445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.604 ms 00:18:13.244 [2024-07-15 13:17:09.882464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.244 [2024-07-15 13:17:09.882707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.244 [2024-07-15 13:17:09.882733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:13.244 [2024-07-15 13:17:09.882749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:18:13.244 [2024-07-15 13:17:09.882763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.244 [2024-07-15 13:17:09.882815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.244 [2024-07-15 13:17:09.882833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:13.244 [2024-07-15 13:17:09.882845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:13.244 [2024-07-15 13:17:09.882859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.244 [2024-07-15 13:17:09.882909] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:13.244 [2024-07-15 13:17:09.885027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.244 [2024-07-15 13:17:09.885063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:13.244 [2024-07-15 13:17:09.885095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.126 ms 00:18:13.244 [2024-07-15 13:17:09.885120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.244 [2024-07-15 13:17:09.885205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.244 [2024-07-15 13:17:09.885224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:13.244 [2024-07-15 13:17:09.885240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:13.244 [2024-07-15 13:17:09.885252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.244 [2024-07-15 13:17:09.885287] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:13.244 [2024-07-15 13:17:09.885315] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:13.244 [2024-07-15 13:17:09.885361] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:13.244 [2024-07-15 13:17:09.885385] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:13.244 [2024-07-15 13:17:09.885496] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:13.244 [2024-07-15 13:17:09.885513] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:13.244 [2024-07-15 13:17:09.885537] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:13.244 [2024-07-15 13:17:09.885552] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:13.244 [2024-07-15 13:17:09.885569] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:13.244 [2024-07-15 13:17:09.885581] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:13.244 [2024-07-15 13:17:09.885598] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:13.244 [2024-07-15 13:17:09.885609] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:13.244 [2024-07-15 13:17:09.885622] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:13.244 [2024-07-15 13:17:09.885636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.244 [2024-07-15 13:17:09.885650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:13.244 [2024-07-15 13:17:09.885662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:18:13.244 [2024-07-15 13:17:09.885686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.244 [2024-07-15 13:17:09.885782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.244 [2024-07-15 13:17:09.885799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:13.244 [2024-07-15 13:17:09.885812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:13.244 [2024-07-15 13:17:09.885825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.244 [2024-07-15 13:17:09.885937] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:13.244 [2024-07-15 13:17:09.885969] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:13.244 [2024-07-15 13:17:09.885982] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:13.244 [2024-07-15 13:17:09.885998] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.244 [2024-07-15 13:17:09.886011] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:13.244 [2024-07-15 13:17:09.886030] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:13.244 [2024-07-15 13:17:09.886042] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:13.244 [2024-07-15 13:17:09.886056] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:13.244 [2024-07-15 13:17:09.886067] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:13.244 [2024-07-15 13:17:09.886080] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:13.244 [2024-07-15 13:17:09.886104] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:13.244 [2024-07-15 13:17:09.886121] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:13.244 [2024-07-15 13:17:09.886132] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:13.244 [2024-07-15 13:17:09.886377] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:13.244 [2024-07-15 13:17:09.886448] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:13.244 [2024-07-15 13:17:09.886494] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.244 
[2024-07-15 13:17:09.886534] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:13.244 [2024-07-15 13:17:09.886574] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:13.244 [2024-07-15 13:17:09.886700] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.244 [2024-07-15 13:17:09.886756] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:13.244 [2024-07-15 13:17:09.886796] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:13.244 [2024-07-15 13:17:09.886839] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:13.244 [2024-07-15 13:17:09.886961] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:13.244 [2024-07-15 13:17:09.887005] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:13.244 [2024-07-15 13:17:09.887044] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:13.244 [2024-07-15 13:17:09.887163] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:13.244 [2024-07-15 13:17:09.887301] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:13.244 [2024-07-15 13:17:09.887420] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:13.244 [2024-07-15 13:17:09.887561] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:13.244 [2024-07-15 13:17:09.887620] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:13.244 [2024-07-15 13:17:09.887733] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:13.244 [2024-07-15 13:17:09.887788] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:13.244 [2024-07-15 13:17:09.887893] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:13.244 [2024-07-15 13:17:09.887920] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:13.244 [2024-07-15 13:17:09.887934] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:13.244 [2024-07-15 13:17:09.887947] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:13.244 [2024-07-15 13:17:09.887958] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:13.244 [2024-07-15 13:17:09.887974] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:13.244 [2024-07-15 13:17:09.887985] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:13.244 [2024-07-15 13:17:09.887998] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.244 [2024-07-15 13:17:09.888009] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:13.244 [2024-07-15 13:17:09.888021] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:13.244 [2024-07-15 13:17:09.888032] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.244 [2024-07-15 13:17:09.888044] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:13.244 [2024-07-15 13:17:09.888056] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:13.244 [2024-07-15 13:17:09.888070] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:13.244 [2024-07-15 13:17:09.888105] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.244 [2024-07-15 13:17:09.888125] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:18:13.244 [2024-07-15 13:17:09.888137] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:13.244 [2024-07-15 13:17:09.888179] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:13.244 [2024-07-15 13:17:09.888195] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:13.244 [2024-07-15 13:17:09.888212] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:13.244 [2024-07-15 13:17:09.888226] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:13.244 [2024-07-15 13:17:09.888251] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:13.244 [2024-07-15 13:17:09.888268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:13.244 [2024-07-15 13:17:09.888288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:13.244 [2024-07-15 13:17:09.888301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:13.244 [2024-07-15 13:17:09.888318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:13.244 [2024-07-15 13:17:09.888331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:13.244 [2024-07-15 13:17:09.888347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:13.244 [2024-07-15 13:17:09.888360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:13.244 [2024-07-15 13:17:09.888377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:13.244 [2024-07-15 13:17:09.888389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:13.244 [2024-07-15 13:17:09.888405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:13.244 [2024-07-15 13:17:09.888419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:13.244 [2024-07-15 13:17:09.888436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:13.244 [2024-07-15 13:17:09.888449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:13.244 [2024-07-15 13:17:09.888466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:13.244 [2024-07-15 13:17:09.888478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:13.244 [2024-07-15 13:17:09.888499] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:13.244 [2024-07-15 
13:17:09.888514] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:13.244 [2024-07-15 13:17:09.888539] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:13.244 [2024-07-15 13:17:09.888552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:13.244 [2024-07-15 13:17:09.888569] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:13.244 [2024-07-15 13:17:09.888587] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:13.244 [2024-07-15 13:17:09.888607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.244 [2024-07-15 13:17:09.888622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:13.244 [2024-07-15 13:17:09.888639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.728 ms 00:18:13.244 [2024-07-15 13:17:09.888652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.244 [2024-07-15 13:17:09.904524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.244 [2024-07-15 13:17:09.904889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:13.244 [2024-07-15 13:17:09.905053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.724 ms 00:18:13.244 [2024-07-15 13:17:09.905110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.244 [2024-07-15 13:17:09.905487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.244 [2024-07-15 13:17:09.905642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:13.244 [2024-07-15 13:17:09.905776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:18:13.244 [2024-07-15 13:17:09.905899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.244 [2024-07-15 13:17:09.920069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.244 [2024-07-15 13:17:09.920348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:13.244 [2024-07-15 13:17:09.920484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.082 ms 00:18:13.244 [2024-07-15 13:17:09.920611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.244 [2024-07-15 13:17:09.920866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.244 [2024-07-15 13:17:09.921008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:13.244 [2024-07-15 13:17:09.921155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:13.244 [2024-07-15 13:17:09.921278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.244 [2024-07-15 13:17:09.921994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.244 [2024-07-15 13:17:09.922206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:13.244 [2024-07-15 13:17:09.922379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.624 ms 00:18:13.244 [2024-07-15 13:17:09.922535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:13.244 [2024-07-15 13:17:09.922967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.244 [2024-07-15 13:17:09.923184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:13.244 [2024-07-15 13:17:09.923375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:18:13.244 [2024-07-15 13:17:09.923539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.244 [2024-07-15 13:17:09.933499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.244 [2024-07-15 13:17:09.933802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:13.244 [2024-07-15 13:17:09.933937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.766 ms 00:18:13.244 [2024-07-15 13:17:09.934007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.244 [2024-07-15 13:17:09.937328] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:13.244 [2024-07-15 13:17:09.937512] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:13.244 [2024-07-15 13:17:09.937690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.244 [2024-07-15 13:17:09.937714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:13.244 [2024-07-15 13:17:09.937735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.285 ms 00:18:13.244 [2024-07-15 13:17:09.937749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.244 [2024-07-15 13:17:09.953734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.244 [2024-07-15 13:17:09.953827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:13.244 [2024-07-15 13:17:09.953878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.893 ms 00:18:13.244 [2024-07-15 13:17:09.953892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.244 [2024-07-15 13:17:09.957116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.244 [2024-07-15 13:17:09.957176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:13.244 [2024-07-15 13:17:09.957202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.016 ms 00:18:13.244 [2024-07-15 13:17:09.957216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.244 [2024-07-15 13:17:09.958913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.244 [2024-07-15 13:17:09.958954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:13.244 [2024-07-15 13:17:09.958977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.630 ms 00:18:13.244 [2024-07-15 13:17:09.958990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.244 [2024-07-15 13:17:09.959495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.244 [2024-07-15 13:17:09.959547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:13.244 [2024-07-15 13:17:09.959570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:18:13.244 [2024-07-15 13:17:09.959583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.503 [2024-07-15 13:17:09.991773] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.503 [2024-07-15 13:17:09.991841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:13.503 [2024-07-15 13:17:09.991872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.131 ms 00:18:13.503 [2024-07-15 13:17:09.991887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.503 [2024-07-15 13:17:10.001609] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:13.503 [2024-07-15 13:17:10.024045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.503 [2024-07-15 13:17:10.024167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:13.503 [2024-07-15 13:17:10.024191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.000 ms 00:18:13.503 [2024-07-15 13:17:10.024211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.503 [2024-07-15 13:17:10.024362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.503 [2024-07-15 13:17:10.024390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:13.503 [2024-07-15 13:17:10.024417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:13.503 [2024-07-15 13:17:10.024444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.503 [2024-07-15 13:17:10.024524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.503 [2024-07-15 13:17:10.024548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:13.503 [2024-07-15 13:17:10.024574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:13.503 [2024-07-15 13:17:10.024612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.503 [2024-07-15 13:17:10.024650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.503 [2024-07-15 13:17:10.024672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:13.503 [2024-07-15 13:17:10.024686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:13.503 [2024-07-15 13:17:10.024708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.503 [2024-07-15 13:17:10.024761] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:13.503 [2024-07-15 13:17:10.024802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.503 [2024-07-15 13:17:10.024815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:13.503 [2024-07-15 13:17:10.024833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:13.503 [2024-07-15 13:17:10.024846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.503 [2024-07-15 13:17:10.029200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.503 [2024-07-15 13:17:10.029258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:13.503 [2024-07-15 13:17:10.029284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.305 ms 00:18:13.503 [2024-07-15 13:17:10.029299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.503 [2024-07-15 13:17:10.029409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.503 [2024-07-15 13:17:10.029442] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:13.503 [2024-07-15 13:17:10.029463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:18:13.503 [2024-07-15 13:17:10.029477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.503 [2024-07-15 13:17:10.030773] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:13.503 [2024-07-15 13:17:10.032217] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 166.050 ms, result 0 00:18:13.503 [2024-07-15 13:17:10.033294] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:13.503 Some configs were skipped because the RPC state that can call them passed over. 00:18:13.503 13:17:10 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:13.760 [2024-07-15 13:17:10.334654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.760 [2024-07-15 13:17:10.334740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:13.760 [2024-07-15 13:17:10.334763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.604 ms 00:18:13.760 [2024-07-15 13:17:10.334825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.760 [2024-07-15 13:17:10.334878] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.839 ms, result 0 00:18:13.760 true 00:18:13.760 13:17:10 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:14.018 [2024-07-15 13:17:10.614546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.018 [2024-07-15 13:17:10.614807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:14.018 [2024-07-15 13:17:10.614844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.177 ms 00:18:14.018 [2024-07-15 13:17:10.614859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.018 [2024-07-15 13:17:10.614949] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.594 ms, result 0 00:18:14.018 true 00:18:14.018 13:17:10 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 89800 00:18:14.018 13:17:10 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 89800 ']' 00:18:14.018 13:17:10 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 89800 00:18:14.018 13:17:10 ftl.ftl_trim -- common/autotest_common.sh@951 -- # uname 00:18:14.018 13:17:10 ftl.ftl_trim -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:14.018 13:17:10 ftl.ftl_trim -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89800 00:18:14.018 killing process with pid 89800 00:18:14.018 13:17:10 ftl.ftl_trim -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:14.018 13:17:10 ftl.ftl_trim -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:14.018 13:17:10 ftl.ftl_trim -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89800' 00:18:14.018 13:17:10 ftl.ftl_trim -- common/autotest_common.sh@965 -- # kill 89800 00:18:14.018 13:17:10 ftl.ftl_trim -- common/autotest_common.sh@970 -- # wait 89800 00:18:14.277 [2024-07-15 13:17:10.860854] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.277 [2024-07-15 13:17:10.860970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:14.277 [2024-07-15 13:17:10.861013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:14.277 [2024-07-15 13:17:10.861043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.277 [2024-07-15 13:17:10.861130] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:14.277 [2024-07-15 13:17:10.862170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.277 [2024-07-15 13:17:10.862217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:14.277 [2024-07-15 13:17:10.862251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.951 ms 00:18:14.277 [2024-07-15 13:17:10.862279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.277 [2024-07-15 13:17:10.862794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.277 [2024-07-15 13:17:10.862860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:14.277 [2024-07-15 13:17:10.862897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:18:14.277 [2024-07-15 13:17:10.862923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.277 [2024-07-15 13:17:10.867510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.277 [2024-07-15 13:17:10.867568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:14.277 [2024-07-15 13:17:10.867593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.515 ms 00:18:14.277 [2024-07-15 13:17:10.867606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.277 [2024-07-15 13:17:10.874939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.277 [2024-07-15 13:17:10.875018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:14.277 [2024-07-15 13:17:10.875045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.265 ms 00:18:14.277 [2024-07-15 13:17:10.875058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.277 [2024-07-15 13:17:10.877072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.277 [2024-07-15 13:17:10.877115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:14.277 [2024-07-15 13:17:10.877135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.875 ms 00:18:14.277 [2024-07-15 13:17:10.877165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.277 [2024-07-15 13:17:10.880737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.277 [2024-07-15 13:17:10.880795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:14.277 [2024-07-15 13:17:10.880818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.514 ms 00:18:14.277 [2024-07-15 13:17:10.880831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.277 [2024-07-15 13:17:10.881002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.277 [2024-07-15 13:17:10.881022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:14.277 [2024-07-15 13:17:10.881039] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:18:14.277 [2024-07-15 13:17:10.881051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.277 [2024-07-15 13:17:10.883232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.277 [2024-07-15 13:17:10.883271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:14.277 [2024-07-15 13:17:10.883290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.149 ms 00:18:14.277 [2024-07-15 13:17:10.883302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.277 [2024-07-15 13:17:10.884919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.277 [2024-07-15 13:17:10.884957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:14.277 [2024-07-15 13:17:10.884975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.541 ms 00:18:14.277 [2024-07-15 13:17:10.884987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.277 [2024-07-15 13:17:10.886232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.277 [2024-07-15 13:17:10.886277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:14.277 [2024-07-15 13:17:10.886296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.192 ms 00:18:14.277 [2024-07-15 13:17:10.886307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.277 [2024-07-15 13:17:10.887454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.277 [2024-07-15 13:17:10.887491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:14.277 [2024-07-15 13:17:10.887509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.059 ms 00:18:14.277 [2024-07-15 13:17:10.887521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.277 [2024-07-15 13:17:10.887593] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:14.277 [2024-07-15 13:17:10.887619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.887638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.887651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.887669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.887681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.887697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.887709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.887724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.887736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.887751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.887764] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.887779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.887792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.887806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.887819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.887833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.887846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.887863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.887875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.887893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.887906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.887922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.887934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.887949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.887962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.887977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.888004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.888021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.888034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.888048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.888061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.888076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.888088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.888103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 13:17:10.888116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:14.277 [2024-07-15 
13:17:10.888135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.888356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.888441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.888595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.888671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.888820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.888886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:18:14.278 [2024-07-15 13:17:10.889481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.889998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.890010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.890028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:14.278 [2024-07-15 13:17:10.890050] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:14.278 [2024-07-15 13:17:10.890065] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1f2b6391-d3a9-46a0-a156-2b8935f16a29 00:18:14.278 [2024-07-15 13:17:10.890078] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:14.278 [2024-07-15 13:17:10.890105] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:14.278 [2024-07-15 13:17:10.890123] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:14.278 [2024-07-15 13:17:10.890169] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:14.278 [2024-07-15 13:17:10.890191] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:14.278 [2024-07-15 13:17:10.890206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:14.278 [2024-07-15 13:17:10.890218] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:14.278 [2024-07-15 13:17:10.890230] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:14.278 [2024-07-15 13:17:10.890241] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:14.278 [2024-07-15 13:17:10.890257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.278 
[2024-07-15 13:17:10.890277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:14.278 [2024-07-15 13:17:10.890293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.669 ms 00:18:14.278 [2024-07-15 13:17:10.890305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.278 [2024-07-15 13:17:10.892796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.278 [2024-07-15 13:17:10.892939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:14.278 [2024-07-15 13:17:10.893053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.415 ms 00:18:14.278 [2024-07-15 13:17:10.893103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.278 [2024-07-15 13:17:10.893408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.278 [2024-07-15 13:17:10.893532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:14.278 [2024-07-15 13:17:10.893654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:18:14.278 [2024-07-15 13:17:10.893783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.278 [2024-07-15 13:17:10.902068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.278 [2024-07-15 13:17:10.902401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:14.278 [2024-07-15 13:17:10.902527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.278 [2024-07-15 13:17:10.902661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.278 [2024-07-15 13:17:10.902867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.278 [2024-07-15 13:17:10.902930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:14.278 [2024-07-15 13:17:10.903050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.278 [2024-07-15 13:17:10.903203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.278 [2024-07-15 13:17:10.903340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.278 [2024-07-15 13:17:10.903411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:14.278 [2024-07-15 13:17:10.903527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.278 [2024-07-15 13:17:10.903581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.278 [2024-07-15 13:17:10.903702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.278 [2024-07-15 13:17:10.903756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:14.278 [2024-07-15 13:17:10.903803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.278 [2024-07-15 13:17:10.903844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.278 [2024-07-15 13:17:10.920401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.278 [2024-07-15 13:17:10.920737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:14.279 [2024-07-15 13:17:10.920865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.279 [2024-07-15 13:17:10.920917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.279 [2024-07-15 13:17:10.931281] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.279 [2024-07-15 13:17:10.931615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:14.279 [2024-07-15 13:17:10.931749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.279 [2024-07-15 13:17:10.931811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.279 [2024-07-15 13:17:10.931968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.279 [2024-07-15 13:17:10.932021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:14.279 [2024-07-15 13:17:10.932137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.279 [2024-07-15 13:17:10.932205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.279 [2024-07-15 13:17:10.932379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.279 [2024-07-15 13:17:10.932434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:14.279 [2024-07-15 13:17:10.932566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.279 [2024-07-15 13:17:10.932600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.279 [2024-07-15 13:17:10.932728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.279 [2024-07-15 13:17:10.932746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:14.279 [2024-07-15 13:17:10.932762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.279 [2024-07-15 13:17:10.932777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.279 [2024-07-15 13:17:10.932844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.279 [2024-07-15 13:17:10.932862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:14.279 [2024-07-15 13:17:10.932877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.279 [2024-07-15 13:17:10.932889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.279 [2024-07-15 13:17:10.932949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.279 [2024-07-15 13:17:10.932964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:14.279 [2024-07-15 13:17:10.932979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.279 [2024-07-15 13:17:10.932994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.279 [2024-07-15 13:17:10.933069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.279 [2024-07-15 13:17:10.933086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:14.279 [2024-07-15 13:17:10.933100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.279 [2024-07-15 13:17:10.933113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.279 [2024-07-15 13:17:10.933332] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 72.447 ms, result 0 00:18:14.537 13:17:11 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:14.537 13:17:11 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:14.795 [2024-07-15 13:17:11.296064] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:14.795 [2024-07-15 13:17:11.296250] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89846 ] 00:18:14.795 [2024-07-15 13:17:11.440332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.053 [2024-07-15 13:17:11.539796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.053 [2024-07-15 13:17:11.665663] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:15.053 [2024-07-15 13:17:11.665754] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:15.312 [2024-07-15 13:17:11.818722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.313 [2024-07-15 13:17:11.818808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:15.313 [2024-07-15 13:17:11.818840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:15.313 [2024-07-15 13:17:11.818854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.313 [2024-07-15 13:17:11.821742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.313 [2024-07-15 13:17:11.821790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:15.313 [2024-07-15 13:17:11.821808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.856 ms 00:18:15.313 [2024-07-15 13:17:11.821821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.313 [2024-07-15 13:17:11.821941] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:15.313 [2024-07-15 13:17:11.822306] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:15.313 [2024-07-15 13:17:11.822333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.313 [2024-07-15 13:17:11.822347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:15.313 [2024-07-15 13:17:11.822363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:18:15.313 [2024-07-15 13:17:11.822385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.313 [2024-07-15 13:17:11.824396] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:15.313 [2024-07-15 13:17:11.827349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.313 [2024-07-15 13:17:11.827393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:15.313 [2024-07-15 13:17:11.827411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.955 ms 00:18:15.313 [2024-07-15 13:17:11.827423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.313 [2024-07-15 13:17:11.827531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.313 [2024-07-15 13:17:11.827553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:15.313 [2024-07-15 13:17:11.827567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 
ms 00:18:15.313 [2024-07-15 13:17:11.827583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.313 [2024-07-15 13:17:11.836090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.313 [2024-07-15 13:17:11.836177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:15.313 [2024-07-15 13:17:11.836197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.426 ms 00:18:15.313 [2024-07-15 13:17:11.836211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.313 [2024-07-15 13:17:11.836439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.313 [2024-07-15 13:17:11.836462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:15.313 [2024-07-15 13:17:11.836476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:18:15.313 [2024-07-15 13:17:11.836504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.313 [2024-07-15 13:17:11.836556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.313 [2024-07-15 13:17:11.836572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:15.313 [2024-07-15 13:17:11.836585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:15.313 [2024-07-15 13:17:11.836596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.313 [2024-07-15 13:17:11.836641] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:15.313 [2024-07-15 13:17:11.838730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.313 [2024-07-15 13:17:11.838766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:15.313 [2024-07-15 13:17:11.838788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.099 ms 00:18:15.313 [2024-07-15 13:17:11.838800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.313 [2024-07-15 13:17:11.838852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.313 [2024-07-15 13:17:11.838869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:15.313 [2024-07-15 13:17:11.838882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:15.313 [2024-07-15 13:17:11.838893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.313 [2024-07-15 13:17:11.838920] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:15.313 [2024-07-15 13:17:11.838958] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:15.313 [2024-07-15 13:17:11.839018] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:15.313 [2024-07-15 13:17:11.839062] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:15.313 [2024-07-15 13:17:11.839229] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:15.313 [2024-07-15 13:17:11.839261] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:15.313 [2024-07-15 13:17:11.839277] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob 
store 0x168 bytes 00:18:15.313 [2024-07-15 13:17:11.839292] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:15.313 [2024-07-15 13:17:11.839317] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:15.313 [2024-07-15 13:17:11.839330] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:15.313 [2024-07-15 13:17:11.839358] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:15.313 [2024-07-15 13:17:11.839373] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:15.313 [2024-07-15 13:17:11.839409] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:15.313 [2024-07-15 13:17:11.839435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.313 [2024-07-15 13:17:11.839470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:15.313 [2024-07-15 13:17:11.839488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:18:15.313 [2024-07-15 13:17:11.839506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.313 [2024-07-15 13:17:11.839613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.313 [2024-07-15 13:17:11.839629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:15.313 [2024-07-15 13:17:11.839651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:15.313 [2024-07-15 13:17:11.839670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.313 [2024-07-15 13:17:11.839791] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:15.313 [2024-07-15 13:17:11.839817] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:15.313 [2024-07-15 13:17:11.839830] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:15.313 [2024-07-15 13:17:11.839842] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.313 [2024-07-15 13:17:11.839868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:15.313 [2024-07-15 13:17:11.839879] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:15.313 [2024-07-15 13:17:11.839890] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:15.313 [2024-07-15 13:17:11.839901] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:15.313 [2024-07-15 13:17:11.839911] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:15.313 [2024-07-15 13:17:11.839921] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:15.313 [2024-07-15 13:17:11.839931] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:15.313 [2024-07-15 13:17:11.839942] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:15.313 [2024-07-15 13:17:11.839955] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:15.313 [2024-07-15 13:17:11.839966] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:15.313 [2024-07-15 13:17:11.839977] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:15.313 [2024-07-15 13:17:11.839988] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.313 [2024-07-15 13:17:11.840000] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md_mirror 00:18:15.313 [2024-07-15 13:17:11.840011] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:15.313 [2024-07-15 13:17:11.840021] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.313 [2024-07-15 13:17:11.840031] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:15.313 [2024-07-15 13:17:11.840050] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:15.313 [2024-07-15 13:17:11.840060] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:15.313 [2024-07-15 13:17:11.840070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:15.313 [2024-07-15 13:17:11.840080] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:15.313 [2024-07-15 13:17:11.840090] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:15.313 [2024-07-15 13:17:11.840101] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:15.313 [2024-07-15 13:17:11.840111] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:15.313 [2024-07-15 13:17:11.840121] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:15.313 [2024-07-15 13:17:11.840137] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:15.313 [2024-07-15 13:17:11.840186] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:15.313 [2024-07-15 13:17:11.840200] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:15.313 [2024-07-15 13:17:11.840210] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:15.313 [2024-07-15 13:17:11.840221] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:15.313 [2024-07-15 13:17:11.840231] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:15.313 [2024-07-15 13:17:11.840241] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:15.313 [2024-07-15 13:17:11.840252] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:15.313 [2024-07-15 13:17:11.840262] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:15.313 [2024-07-15 13:17:11.840274] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:15.313 [2024-07-15 13:17:11.840285] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:15.313 [2024-07-15 13:17:11.840299] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.313 [2024-07-15 13:17:11.840309] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:15.313 [2024-07-15 13:17:11.840320] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:15.313 [2024-07-15 13:17:11.840330] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.313 [2024-07-15 13:17:11.840340] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:15.313 [2024-07-15 13:17:11.840354] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:15.313 [2024-07-15 13:17:11.840367] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:15.313 [2024-07-15 13:17:11.840378] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.314 [2024-07-15 13:17:11.840389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:15.314 [2024-07-15 13:17:11.840400] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:15.314 [2024-07-15 13:17:11.840411] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:15.314 [2024-07-15 13:17:11.840422] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:15.314 [2024-07-15 13:17:11.840432] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:15.314 [2024-07-15 13:17:11.840442] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:15.314 [2024-07-15 13:17:11.840454] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:15.314 [2024-07-15 13:17:11.840467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:15.314 [2024-07-15 13:17:11.840480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:15.314 [2024-07-15 13:17:11.840491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:15.314 [2024-07-15 13:17:11.840502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:15.314 [2024-07-15 13:17:11.840513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:15.314 [2024-07-15 13:17:11.840524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:15.314 [2024-07-15 13:17:11.840538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:15.314 [2024-07-15 13:17:11.840550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:15.314 [2024-07-15 13:17:11.840562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:15.314 [2024-07-15 13:17:11.840573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:15.314 [2024-07-15 13:17:11.840584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:15.314 [2024-07-15 13:17:11.840595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:15.314 [2024-07-15 13:17:11.840606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:15.314 [2024-07-15 13:17:11.840617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:15.314 [2024-07-15 13:17:11.840628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:15.314 [2024-07-15 13:17:11.840640] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:15.314 [2024-07-15 13:17:11.840657] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:15.314 [2024-07-15 13:17:11.840681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:15.314 [2024-07-15 13:17:11.840693] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:15.314 [2024-07-15 13:17:11.840704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:15.314 [2024-07-15 13:17:11.840716] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:15.314 [2024-07-15 13:17:11.840728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.314 [2024-07-15 13:17:11.840746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:15.314 [2024-07-15 13:17:11.840759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.001 ms 00:18:15.314 [2024-07-15 13:17:11.840771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.314 [2024-07-15 13:17:11.864486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.314 [2024-07-15 13:17:11.864559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:15.314 [2024-07-15 13:17:11.864583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.613 ms 00:18:15.314 [2024-07-15 13:17:11.864604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.314 [2024-07-15 13:17:11.864832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.314 [2024-07-15 13:17:11.864866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:15.314 [2024-07-15 13:17:11.864881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:18:15.314 [2024-07-15 13:17:11.864892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.314 [2024-07-15 13:17:11.878733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.314 [2024-07-15 13:17:11.878805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:15.314 [2024-07-15 13:17:11.878827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.802 ms 00:18:15.314 [2024-07-15 13:17:11.878847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.314 [2024-07-15 13:17:11.878995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.314 [2024-07-15 13:17:11.879016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:15.314 [2024-07-15 13:17:11.879030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:15.314 [2024-07-15 13:17:11.879042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.314 [2024-07-15 13:17:11.879672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.314 [2024-07-15 13:17:11.879703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:15.314 [2024-07-15 13:17:11.879718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:18:15.314 [2024-07-15 13:17:11.879730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.314 [2024-07-15 13:17:11.879913] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:18:15.314 [2024-07-15 13:17:11.879932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:15.314 [2024-07-15 13:17:11.879945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:18:15.314 [2024-07-15 13:17:11.879956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.314 [2024-07-15 13:17:11.888114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.314 [2024-07-15 13:17:11.888186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:15.314 [2024-07-15 13:17:11.888207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.125 ms 00:18:15.314 [2024-07-15 13:17:11.888219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.314 [2024-07-15 13:17:11.891301] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:15.314 [2024-07-15 13:17:11.891347] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:15.314 [2024-07-15 13:17:11.891372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.314 [2024-07-15 13:17:11.891387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:15.314 [2024-07-15 13:17:11.891399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.977 ms 00:18:15.314 [2024-07-15 13:17:11.891410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.314 [2024-07-15 13:17:11.907416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.314 [2024-07-15 13:17:11.907506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:15.314 [2024-07-15 13:17:11.907529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.942 ms 00:18:15.314 [2024-07-15 13:17:11.907556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.314 [2024-07-15 13:17:11.910738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.314 [2024-07-15 13:17:11.910782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:15.314 [2024-07-15 13:17:11.910799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.014 ms 00:18:15.314 [2024-07-15 13:17:11.910811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.314 [2024-07-15 13:17:11.912462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.314 [2024-07-15 13:17:11.912503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:15.314 [2024-07-15 13:17:11.912519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.596 ms 00:18:15.314 [2024-07-15 13:17:11.912530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.314 [2024-07-15 13:17:11.913027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.314 [2024-07-15 13:17:11.913069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:15.314 [2024-07-15 13:17:11.913086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:18:15.314 [2024-07-15 13:17:11.913098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.314 [2024-07-15 13:17:11.936156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.314 [2024-07-15 
13:17:11.936252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:15.314 [2024-07-15 13:17:11.936284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.010 ms 00:18:15.314 [2024-07-15 13:17:11.936298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.314 [2024-07-15 13:17:11.945341] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:15.314 [2024-07-15 13:17:11.967348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.314 [2024-07-15 13:17:11.967427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:15.314 [2024-07-15 13:17:11.967470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.865 ms 00:18:15.314 [2024-07-15 13:17:11.967484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.314 [2024-07-15 13:17:11.967636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.314 [2024-07-15 13:17:11.967656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:15.314 [2024-07-15 13:17:11.967675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:15.314 [2024-07-15 13:17:11.967687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.314 [2024-07-15 13:17:11.967775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.314 [2024-07-15 13:17:11.967792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:15.314 [2024-07-15 13:17:11.967816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:18:15.314 [2024-07-15 13:17:11.967828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.314 [2024-07-15 13:17:11.967865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.314 [2024-07-15 13:17:11.967879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:15.314 [2024-07-15 13:17:11.967905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:15.314 [2024-07-15 13:17:11.967922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.314 [2024-07-15 13:17:11.967966] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:15.314 [2024-07-15 13:17:11.967985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.314 [2024-07-15 13:17:11.967998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:15.314 [2024-07-15 13:17:11.968012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:15.314 [2024-07-15 13:17:11.968025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.314 [2024-07-15 13:17:11.972379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.315 [2024-07-15 13:17:11.972424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:15.315 [2024-07-15 13:17:11.972441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.323 ms 00:18:15.315 [2024-07-15 13:17:11.972460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.315 [2024-07-15 13:17:11.972554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.315 [2024-07-15 13:17:11.972574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:15.315 [2024-07-15 
13:17:11.972587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:15.315 [2024-07-15 13:17:11.972599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.315 [2024-07-15 13:17:11.973861] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:15.315 [2024-07-15 13:17:11.975097] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 154.819 ms, result 0 00:18:15.315 [2024-07-15 13:17:11.976008] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:15.315 [2024-07-15 13:17:11.984111] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:26.206  Copying: 27/256 [MB] (27 MBps) Copying: 50/256 [MB] (22 MBps) Copying: 74/256 [MB] (23 MBps) Copying: 98/256 [MB] (24 MBps) Copying: 122/256 [MB] (23 MBps) Copying: 144/256 [MB] (22 MBps) Copying: 169/256 [MB] (24 MBps) Copying: 192/256 [MB] (22 MBps) Copying: 216/256 [MB] (24 MBps) Copying: 240/256 [MB] (24 MBps) Copying: 256/256 [MB] (average 23 MBps)[2024-07-15 13:17:22.702324] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:26.206 [2024-07-15 13:17:22.704287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.206 [2024-07-15 13:17:22.704450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:26.206 [2024-07-15 13:17:22.704584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:26.206 [2024-07-15 13:17:22.704639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.206 [2024-07-15 13:17:22.704710] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:26.206 [2024-07-15 13:17:22.705805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.206 [2024-07-15 13:17:22.705952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:26.206 [2024-07-15 13:17:22.706074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.814 ms 00:18:26.206 [2024-07-15 13:17:22.706158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.206 [2024-07-15 13:17:22.706584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.206 [2024-07-15 13:17:22.706609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:26.206 [2024-07-15 13:17:22.706631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:18:26.206 [2024-07-15 13:17:22.706643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.206 [2024-07-15 13:17:22.710295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.206 [2024-07-15 13:17:22.710326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:26.206 [2024-07-15 13:17:22.710342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.628 ms 00:18:26.206 [2024-07-15 13:17:22.710354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.206 [2024-07-15 13:17:22.717597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.206 [2024-07-15 13:17:22.717643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:26.206 [2024-07-15 13:17:22.717659] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.219 ms 00:18:26.206 [2024-07-15 13:17:22.717676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.206 [2024-07-15 13:17:22.719112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.206 [2024-07-15 13:17:22.719171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:26.206 [2024-07-15 13:17:22.719190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.365 ms 00:18:26.206 [2024-07-15 13:17:22.719202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.206 [2024-07-15 13:17:22.722880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.206 [2024-07-15 13:17:22.722923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:26.206 [2024-07-15 13:17:22.722940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.635 ms 00:18:26.206 [2024-07-15 13:17:22.722952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.206 [2024-07-15 13:17:22.723100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.206 [2024-07-15 13:17:22.723120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:26.206 [2024-07-15 13:17:22.723139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:18:26.206 [2024-07-15 13:17:22.723165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.206 [2024-07-15 13:17:22.725237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.206 [2024-07-15 13:17:22.725274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:26.206 [2024-07-15 13:17:22.725289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.033 ms 00:18:26.206 [2024-07-15 13:17:22.725300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.206 [2024-07-15 13:17:22.726660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.206 [2024-07-15 13:17:22.726696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:26.206 [2024-07-15 13:17:22.726711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.321 ms 00:18:26.206 [2024-07-15 13:17:22.726721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.206 [2024-07-15 13:17:22.727904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.206 [2024-07-15 13:17:22.727942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:26.206 [2024-07-15 13:17:22.727958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.144 ms 00:18:26.206 [2024-07-15 13:17:22.727968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.206 [2024-07-15 13:17:22.729133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.206 [2024-07-15 13:17:22.729185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:26.206 [2024-07-15 13:17:22.729201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.068 ms 00:18:26.206 [2024-07-15 13:17:22.729211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.206 [2024-07-15 13:17:22.729250] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:26.206 [2024-07-15 13:17:22.729274] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:26.206 [2024-07-15 13:17:22.729289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:26.206 [2024-07-15 13:17:22.729301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:26.206 [2024-07-15 13:17:22.729313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:26.206 [2024-07-15 13:17:22.729325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:26.206 [2024-07-15 13:17:22.729337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:26.206 [2024-07-15 13:17:22.729349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:26.206 [2024-07-15 13:17:22.729361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:26.206 [2024-07-15 13:17:22.729372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:26.206 [2024-07-15 13:17:22.729384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:26.206 [2024-07-15 13:17:22.729396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:26.206 [2024-07-15 13:17:22.729408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:26.206 [2024-07-15 13:17:22.729420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:26.206 [2024-07-15 13:17:22.729431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:26.206 [2024-07-15 13:17:22.729443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:26.206 [2024-07-15 13:17:22.729455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:26.206 [2024-07-15 13:17:22.729466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:26.206 [2024-07-15 13:17:22.729478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:26.206 [2024-07-15 13:17:22.729490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:26.206 [2024-07-15 13:17:22.729502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:26.206 [2024-07-15 13:17:22.729514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:26.206 [2024-07-15 13:17:22.729525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729572] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 
13:17:22.729891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.729988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.730000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.730012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.730023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.730035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.730047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.730059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.730073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.730086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.730098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.730124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.730137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.730476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.730545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.730675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.730746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.730805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:18:26.207 [2024-07-15 13:17:22.730984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.731120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.731208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.731346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.731411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.731526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.731550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.731564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.731576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.731588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.731600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.731612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.731625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.731637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.731649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.731660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.731672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.731684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.731695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.731707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.731719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.731730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.731743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.731755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.731766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:18:26.207 [2024-07-15 13:17:22.731788] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:26.207 [2024-07-15 13:17:22.731800] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1f2b6391-d3a9-46a0-a156-2b8935f16a29 00:18:26.207 [2024-07-15 13:17:22.731812] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:26.207 [2024-07-15 13:17:22.731823] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:26.207 [2024-07-15 13:17:22.731834] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:26.207 [2024-07-15 13:17:22.731846] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:26.207 [2024-07-15 13:17:22.731857] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:26.207 [2024-07-15 13:17:22.731887] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:26.207 [2024-07-15 13:17:22.731899] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:26.207 [2024-07-15 13:17:22.731909] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:26.207 [2024-07-15 13:17:22.731918] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:26.207 [2024-07-15 13:17:22.731931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.207 [2024-07-15 13:17:22.731943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:26.207 [2024-07-15 13:17:22.731966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.682 ms 00:18:26.207 [2024-07-15 13:17:22.731983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.207 [2024-07-15 13:17:22.734176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.207 [2024-07-15 13:17:22.734207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:26.207 [2024-07-15 13:17:22.734221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.152 ms 00:18:26.207 [2024-07-15 13:17:22.734240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.207 [2024-07-15 13:17:22.734385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.207 [2024-07-15 13:17:22.734401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:26.207 [2024-07-15 13:17:22.734413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:18:26.207 [2024-07-15 13:17:22.734425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.207 [2024-07-15 13:17:22.741780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.208 [2024-07-15 13:17:22.741838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:26.208 [2024-07-15 13:17:22.741865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.208 [2024-07-15 13:17:22.741878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.208 [2024-07-15 13:17:22.742005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.208 [2024-07-15 13:17:22.742024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:26.208 [2024-07-15 13:17:22.742037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.208 [2024-07-15 13:17:22.742048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:26.208 [2024-07-15 13:17:22.742126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.208 [2024-07-15 13:17:22.742170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:26.208 [2024-07-15 13:17:22.742187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.208 [2024-07-15 13:17:22.742199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.208 [2024-07-15 13:17:22.742235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.208 [2024-07-15 13:17:22.742250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:26.208 [2024-07-15 13:17:22.742262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.208 [2024-07-15 13:17:22.742274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.208 [2024-07-15 13:17:22.757446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.208 [2024-07-15 13:17:22.757525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:26.208 [2024-07-15 13:17:22.757547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.208 [2024-07-15 13:17:22.757572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.208 [2024-07-15 13:17:22.767876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.208 [2024-07-15 13:17:22.767954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:26.208 [2024-07-15 13:17:22.767976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.208 [2024-07-15 13:17:22.767989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.208 [2024-07-15 13:17:22.768081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.208 [2024-07-15 13:17:22.768099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:26.208 [2024-07-15 13:17:22.768113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.208 [2024-07-15 13:17:22.768124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.208 [2024-07-15 13:17:22.768197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.208 [2024-07-15 13:17:22.768216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:26.208 [2024-07-15 13:17:22.768229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.208 [2024-07-15 13:17:22.768242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.208 [2024-07-15 13:17:22.768348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.208 [2024-07-15 13:17:22.768368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:26.208 [2024-07-15 13:17:22.768382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.208 [2024-07-15 13:17:22.768393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.208 [2024-07-15 13:17:22.768445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.208 [2024-07-15 13:17:22.768470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:26.208 [2024-07-15 13:17:22.768483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.208 [2024-07-15 
13:17:22.768505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.208 [2024-07-15 13:17:22.768559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.208 [2024-07-15 13:17:22.768575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:26.208 [2024-07-15 13:17:22.768587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.208 [2024-07-15 13:17:22.768599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.208 [2024-07-15 13:17:22.768662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.208 [2024-07-15 13:17:22.768680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:26.208 [2024-07-15 13:17:22.768692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.208 [2024-07-15 13:17:22.768716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.208 [2024-07-15 13:17:22.768901] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 64.602 ms, result 0 00:18:26.466 00:18:26.466 00:18:26.466 13:17:23 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:18:26.466 13:17:23 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:27.030 13:17:23 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:27.030 [2024-07-15 13:17:23.764987] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:18:27.030 [2024-07-15 13:17:23.765242] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89974 ] 00:18:27.288 [2024-07-15 13:17:23.916337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.288 [2024-07-15 13:17:24.019984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.546 [2024-07-15 13:17:24.149164] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:27.546 [2024-07-15 13:17:24.149259] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:27.804 [2024-07-15 13:17:24.304032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.804 [2024-07-15 13:17:24.304111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:27.804 [2024-07-15 13:17:24.304135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:27.804 [2024-07-15 13:17:24.304172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.804 [2024-07-15 13:17:24.307210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.804 [2024-07-15 13:17:24.307259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:27.804 [2024-07-15 13:17:24.307278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.000 ms 00:18:27.804 [2024-07-15 13:17:24.307290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.804 [2024-07-15 13:17:24.307452] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:27.804 [2024-07-15 13:17:24.307810] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:27.804 [2024-07-15 13:17:24.307844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.804 [2024-07-15 13:17:24.307859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:27.804 [2024-07-15 13:17:24.307876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:18:27.804 [2024-07-15 13:17:24.307888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.804 [2024-07-15 13:17:24.310073] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:27.804 [2024-07-15 13:17:24.313176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.805 [2024-07-15 13:17:24.313223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:27.805 [2024-07-15 13:17:24.313258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.104 ms 00:18:27.805 [2024-07-15 13:17:24.313271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.805 [2024-07-15 13:17:24.313442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.805 [2024-07-15 13:17:24.313479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:27.805 [2024-07-15 13:17:24.313499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:27.805 [2024-07-15 13:17:24.313518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.805 [2024-07-15 13:17:24.322234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.805 [2024-07-15 
13:17:24.322295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:27.805 [2024-07-15 13:17:24.322314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.620 ms 00:18:27.805 [2024-07-15 13:17:24.322328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.805 [2024-07-15 13:17:24.322548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.805 [2024-07-15 13:17:24.322573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:27.805 [2024-07-15 13:17:24.322588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:18:27.805 [2024-07-15 13:17:24.322605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.805 [2024-07-15 13:17:24.322658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.805 [2024-07-15 13:17:24.322675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:27.805 [2024-07-15 13:17:24.322687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:27.805 [2024-07-15 13:17:24.322709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.805 [2024-07-15 13:17:24.322751] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:27.805 [2024-07-15 13:17:24.324923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.805 [2024-07-15 13:17:24.324962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:27.805 [2024-07-15 13:17:24.324985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.183 ms 00:18:27.805 [2024-07-15 13:17:24.325007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.805 [2024-07-15 13:17:24.325083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.805 [2024-07-15 13:17:24.325101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:27.805 [2024-07-15 13:17:24.325114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:27.805 [2024-07-15 13:17:24.325125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.805 [2024-07-15 13:17:24.325180] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:27.805 [2024-07-15 13:17:24.325217] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:27.805 [2024-07-15 13:17:24.325268] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:27.805 [2024-07-15 13:17:24.325302] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:27.805 [2024-07-15 13:17:24.325416] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:27.805 [2024-07-15 13:17:24.325433] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:27.805 [2024-07-15 13:17:24.325448] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:27.805 [2024-07-15 13:17:24.325463] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:27.805 [2024-07-15 13:17:24.325476] ftl_layout.c: 677:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:27.805 [2024-07-15 13:17:24.325499] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:27.805 [2024-07-15 13:17:24.325511] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:27.805 [2024-07-15 13:17:24.325522] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:27.805 [2024-07-15 13:17:24.325537] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:27.805 [2024-07-15 13:17:24.325550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.805 [2024-07-15 13:17:24.325561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:27.805 [2024-07-15 13:17:24.325574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:18:27.805 [2024-07-15 13:17:24.325584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.805 [2024-07-15 13:17:24.325687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.805 [2024-07-15 13:17:24.325704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:27.805 [2024-07-15 13:17:24.325716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:27.805 [2024-07-15 13:17:24.325727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.805 [2024-07-15 13:17:24.325851] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:27.805 [2024-07-15 13:17:24.325872] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:27.805 [2024-07-15 13:17:24.325886] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:27.805 [2024-07-15 13:17:24.325898] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:27.805 [2024-07-15 13:17:24.325909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:27.805 [2024-07-15 13:17:24.325919] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:27.805 [2024-07-15 13:17:24.325929] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:27.805 [2024-07-15 13:17:24.325940] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:27.805 [2024-07-15 13:17:24.325951] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:27.805 [2024-07-15 13:17:24.325962] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:27.805 [2024-07-15 13:17:24.325972] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:27.805 [2024-07-15 13:17:24.325982] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:27.805 [2024-07-15 13:17:24.325996] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:27.805 [2024-07-15 13:17:24.326007] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:27.805 [2024-07-15 13:17:24.326018] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:27.805 [2024-07-15 13:17:24.326030] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:27.805 [2024-07-15 13:17:24.326040] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:27.805 [2024-07-15 13:17:24.326056] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:27.805 [2024-07-15 13:17:24.326067] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:18:27.805 [2024-07-15 13:17:24.326077] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:27.805 [2024-07-15 13:17:24.326087] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:27.805 [2024-07-15 13:17:24.326097] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:27.805 [2024-07-15 13:17:24.326122] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:27.805 [2024-07-15 13:17:24.326135] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:27.805 [2024-07-15 13:17:24.326375] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:27.805 [2024-07-15 13:17:24.326434] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:27.805 [2024-07-15 13:17:24.326475] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:27.805 [2024-07-15 13:17:24.326513] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:27.805 [2024-07-15 13:17:24.326654] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:27.805 [2024-07-15 13:17:24.326707] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:27.805 [2024-07-15 13:17:24.326746] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:27.805 [2024-07-15 13:17:24.326827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:27.805 [2024-07-15 13:17:24.326927] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:27.805 [2024-07-15 13:17:24.326976] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:27.805 [2024-07-15 13:17:24.327054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:27.805 [2024-07-15 13:17:24.327097] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:27.806 [2024-07-15 13:17:24.327134] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:27.806 [2024-07-15 13:17:24.327195] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:27.806 [2024-07-15 13:17:24.327339] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:27.806 [2024-07-15 13:17:24.327389] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:27.806 [2024-07-15 13:17:24.327428] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:27.806 [2024-07-15 13:17:24.327484] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:27.806 [2024-07-15 13:17:24.327532] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:27.806 [2024-07-15 13:17:24.327568] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:27.806 [2024-07-15 13:17:24.327611] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:27.806 [2024-07-15 13:17:24.327651] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:27.806 [2024-07-15 13:17:24.327701] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:27.806 [2024-07-15 13:17:24.327741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:27.806 [2024-07-15 13:17:24.327778] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:27.806 [2024-07-15 13:17:24.327815] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:27.806 [2024-07-15 13:17:24.327851] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:27.806 [2024-07-15 13:17:24.327905] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:27.806 [2024-07-15 13:17:24.327942] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:27.806 [2024-07-15 13:17:24.327980] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:27.806 [2024-07-15 13:17:24.328038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:27.806 [2024-07-15 13:17:24.328121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:27.806 [2024-07-15 13:17:24.328136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:27.806 [2024-07-15 13:17:24.328162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:27.806 [2024-07-15 13:17:24.328176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:27.806 [2024-07-15 13:17:24.328188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:27.806 [2024-07-15 13:17:24.328204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:27.806 [2024-07-15 13:17:24.328216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:27.806 [2024-07-15 13:17:24.328228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:27.806 [2024-07-15 13:17:24.328239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:27.806 [2024-07-15 13:17:24.328250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:27.806 [2024-07-15 13:17:24.328261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:27.806 [2024-07-15 13:17:24.328273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:27.806 [2024-07-15 13:17:24.328284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:27.806 [2024-07-15 13:17:24.328307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:27.806 [2024-07-15 13:17:24.328318] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:27.806 [2024-07-15 13:17:24.328335] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:27.806 [2024-07-15 13:17:24.328360] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:18:27.806 [2024-07-15 13:17:24.328371] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:27.806 [2024-07-15 13:17:24.328383] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:27.806 [2024-07-15 13:17:24.328394] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:27.806 [2024-07-15 13:17:24.328408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.806 [2024-07-15 13:17:24.328423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:27.806 [2024-07-15 13:17:24.328436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.620 ms 00:18:27.806 [2024-07-15 13:17:24.328447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.806 [2024-07-15 13:17:24.356794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.806 [2024-07-15 13:17:24.357093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:27.806 [2024-07-15 13:17:24.357275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.219 ms 00:18:27.806 [2024-07-15 13:17:24.357439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.806 [2024-07-15 13:17:24.357783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.806 [2024-07-15 13:17:24.357960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:27.806 [2024-07-15 13:17:24.358128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:18:27.806 [2024-07-15 13:17:24.358226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.806 [2024-07-15 13:17:24.371286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.806 [2024-07-15 13:17:24.371512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:27.806 [2024-07-15 13:17:24.371686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.923 ms 00:18:27.806 [2024-07-15 13:17:24.371748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.806 [2024-07-15 13:17:24.371986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.806 [2024-07-15 13:17:24.372133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:27.806 [2024-07-15 13:17:24.372278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:27.806 [2024-07-15 13:17:24.372393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.806 [2024-07-15 13:17:24.373046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.806 [2024-07-15 13:17:24.373193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:27.806 [2024-07-15 13:17:24.373308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:18:27.806 [2024-07-15 13:17:24.373463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.806 [2024-07-15 13:17:24.373713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.806 [2024-07-15 13:17:24.373846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:27.806 [2024-07-15 13:17:24.373964] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:18:27.806 [2024-07-15 13:17:24.374096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.806 [2024-07-15 13:17:24.382603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.806 [2024-07-15 13:17:24.382813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:27.806 [2024-07-15 13:17:24.382948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.391 ms 00:18:27.806 [2024-07-15 13:17:24.383004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.806 [2024-07-15 13:17:24.386296] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:27.806 [2024-07-15 13:17:24.386487] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:27.806 [2024-07-15 13:17:24.386634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.806 [2024-07-15 13:17:24.386787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:27.806 [2024-07-15 13:17:24.386842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.277 ms 00:18:27.806 [2024-07-15 13:17:24.386944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.806 [2024-07-15 13:17:24.403284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.806 [2024-07-15 13:17:24.403495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:27.806 [2024-07-15 13:17:24.403534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.118 ms 00:18:27.806 [2024-07-15 13:17:24.403550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.806 [2024-07-15 13:17:24.406673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.806 [2024-07-15 13:17:24.406732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:27.806 [2024-07-15 13:17:24.406756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.977 ms 00:18:27.806 [2024-07-15 13:17:24.406772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.806 [2024-07-15 13:17:24.408502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.807 [2024-07-15 13:17:24.408544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:27.807 [2024-07-15 13:17:24.408561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.656 ms 00:18:27.807 [2024-07-15 13:17:24.408572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.807 [2024-07-15 13:17:24.409093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.807 [2024-07-15 13:17:24.409129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:27.807 [2024-07-15 13:17:24.409158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:18:27.807 [2024-07-15 13:17:24.409173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.807 [2024-07-15 13:17:24.432717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.807 [2024-07-15 13:17:24.432798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:27.807 [2024-07-15 13:17:24.432837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.506 ms 
00:18:27.807 [2024-07-15 13:17:24.432850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.807 [2024-07-15 13:17:24.441875] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:27.807 [2024-07-15 13:17:24.464153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.807 [2024-07-15 13:17:24.464231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:27.807 [2024-07-15 13:17:24.464253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.102 ms 00:18:27.807 [2024-07-15 13:17:24.464283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.807 [2024-07-15 13:17:24.464428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.807 [2024-07-15 13:17:24.464450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:27.807 [2024-07-15 13:17:24.464469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:27.807 [2024-07-15 13:17:24.464491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.807 [2024-07-15 13:17:24.464570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.807 [2024-07-15 13:17:24.464587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:27.807 [2024-07-15 13:17:24.464600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:18:27.807 [2024-07-15 13:17:24.464612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.807 [2024-07-15 13:17:24.464649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.807 [2024-07-15 13:17:24.464665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:27.807 [2024-07-15 13:17:24.464693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:27.807 [2024-07-15 13:17:24.464710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.807 [2024-07-15 13:17:24.464766] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:27.807 [2024-07-15 13:17:24.464793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.807 [2024-07-15 13:17:24.464805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:27.807 [2024-07-15 13:17:24.464820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:27.807 [2024-07-15 13:17:24.464833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.807 [2024-07-15 13:17:24.469204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.807 [2024-07-15 13:17:24.469255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:27.807 [2024-07-15 13:17:24.469274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.336 ms 00:18:27.807 [2024-07-15 13:17:24.469294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.807 [2024-07-15 13:17:24.469394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.807 [2024-07-15 13:17:24.469414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:27.807 [2024-07-15 13:17:24.469428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:18:27.807 [2024-07-15 13:17:24.469440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.807 [2024-07-15 
13:17:24.470791] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:27.807 [2024-07-15 13:17:24.472057] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 166.443 ms, result 0 00:18:27.807 [2024-07-15 13:17:24.472845] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:27.807 [2024-07-15 13:17:24.480923] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:28.066  Copying: 4096/4096 [kB] (average 24 MBps)[2024-07-15 13:17:24.645994] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:28.066 [2024-07-15 13:17:24.647737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.066 [2024-07-15 13:17:24.647786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:28.066 [2024-07-15 13:17:24.647808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:28.066 [2024-07-15 13:17:24.647820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.066 [2024-07-15 13:17:24.647852] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:28.066 [2024-07-15 13:17:24.648688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.066 [2024-07-15 13:17:24.648720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:28.066 [2024-07-15 13:17:24.648736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.814 ms 00:18:28.066 [2024-07-15 13:17:24.648747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.066 [2024-07-15 13:17:24.650375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.066 [2024-07-15 13:17:24.650417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:28.066 [2024-07-15 13:17:24.650442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.597 ms 00:18:28.066 [2024-07-15 13:17:24.650454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.066 [2024-07-15 13:17:24.654289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.066 [2024-07-15 13:17:24.654329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:28.066 [2024-07-15 13:17:24.654346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.808 ms 00:18:28.066 [2024-07-15 13:17:24.654358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.066 [2024-07-15 13:17:24.661816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.066 [2024-07-15 13:17:24.661864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:28.066 [2024-07-15 13:17:24.661890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.412 ms 00:18:28.066 [2024-07-15 13:17:24.661902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.066 [2024-07-15 13:17:24.663866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.066 [2024-07-15 13:17:24.663909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:28.066 [2024-07-15 13:17:24.663925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.872 ms 00:18:28.066 
[2024-07-15 13:17:24.663937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.066 [2024-07-15 13:17:24.667448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.066 [2024-07-15 13:17:24.667513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:28.066 [2024-07-15 13:17:24.667530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.467 ms 00:18:28.066 [2024-07-15 13:17:24.667542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.066 [2024-07-15 13:17:24.667704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.066 [2024-07-15 13:17:24.667725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:28.066 [2024-07-15 13:17:24.667745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:18:28.067 [2024-07-15 13:17:24.667757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.067 [2024-07-15 13:17:24.669624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.067 [2024-07-15 13:17:24.669664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:28.067 [2024-07-15 13:17:24.669680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.829 ms 00:18:28.067 [2024-07-15 13:17:24.669691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.067 [2024-07-15 13:17:24.671120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.067 [2024-07-15 13:17:24.671172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:28.067 [2024-07-15 13:17:24.671187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.387 ms 00:18:28.067 [2024-07-15 13:17:24.671198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.067 [2024-07-15 13:17:24.672398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.067 [2024-07-15 13:17:24.672436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:28.067 [2024-07-15 13:17:24.672451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.157 ms 00:18:28.067 [2024-07-15 13:17:24.672461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.067 [2024-07-15 13:17:24.673604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.067 [2024-07-15 13:17:24.673644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:28.067 [2024-07-15 13:17:24.673658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.063 ms 00:18:28.067 [2024-07-15 13:17:24.673669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.067 [2024-07-15 13:17:24.673711] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:28.067 [2024-07-15 13:17:24.673736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.673751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.673763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.673775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.673787] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.673799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.673812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.673824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.673836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.673848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.673860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.673872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.673884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.673896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.673908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.673919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.673931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.673943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.673955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.673968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.673980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.673992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 
13:17:24.674090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:18:28.067 [2024-07-15 13:17:24.674441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:28.067 [2024-07-15 13:17:24.674600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.674990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.675002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:28.068 [2024-07-15 13:17:24.675024] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:28.068 [2024-07-15 13:17:24.675036] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1f2b6391-d3a9-46a0-a156-2b8935f16a29 00:18:28.068 [2024-07-15 13:17:24.675048] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:28.068 [2024-07-15 13:17:24.675059] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:28.068 [2024-07-15 
13:17:24.675071] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:28.068 [2024-07-15 13:17:24.675083] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:28.068 [2024-07-15 13:17:24.675106] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:28.068 [2024-07-15 13:17:24.675122] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:28.068 [2024-07-15 13:17:24.675486] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:28.068 [2024-07-15 13:17:24.675555] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:28.068 [2024-07-15 13:17:24.675595] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:28.068 [2024-07-15 13:17:24.675635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.068 [2024-07-15 13:17:24.675753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:28.068 [2024-07-15 13:17:24.675873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.926 ms 00:18:28.068 [2024-07-15 13:17:24.675935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.068 [2024-07-15 13:17:24.678176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.068 [2024-07-15 13:17:24.678315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:28.068 [2024-07-15 13:17:24.678425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.176 ms 00:18:28.068 [2024-07-15 13:17:24.678494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.068 [2024-07-15 13:17:24.678661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.068 [2024-07-15 13:17:24.678709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:28.068 [2024-07-15 13:17:24.678810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:18:28.068 [2024-07-15 13:17:24.678859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.068 [2024-07-15 13:17:24.686480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:28.068 [2024-07-15 13:17:24.686773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:28.068 [2024-07-15 13:17:24.686906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:28.068 [2024-07-15 13:17:24.686957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.068 [2024-07-15 13:17:24.687112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:28.068 [2024-07-15 13:17:24.687220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:28.068 [2024-07-15 13:17:24.687291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:28.068 [2024-07-15 13:17:24.687340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.068 [2024-07-15 13:17:24.687468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:28.068 [2024-07-15 13:17:24.687583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:28.068 [2024-07-15 13:17:24.687603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:28.068 [2024-07-15 13:17:24.687623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.068 [2024-07-15 13:17:24.687659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:18:28.068 [2024-07-15 13:17:24.687682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:28.068 [2024-07-15 13:17:24.687695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:28.068 [2024-07-15 13:17:24.687706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.068 [2024-07-15 13:17:24.703985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:28.068 [2024-07-15 13:17:24.704318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:28.068 [2024-07-15 13:17:24.704440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:28.068 [2024-07-15 13:17:24.704503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.068 [2024-07-15 13:17:24.714772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:28.068 [2024-07-15 13:17:24.715068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:28.068 [2024-07-15 13:17:24.715098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:28.068 [2024-07-15 13:17:24.715113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.068 [2024-07-15 13:17:24.715231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:28.068 [2024-07-15 13:17:24.715252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:28.068 [2024-07-15 13:17:24.715265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:28.068 [2024-07-15 13:17:24.715277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.068 [2024-07-15 13:17:24.715330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:28.068 [2024-07-15 13:17:24.715357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:28.068 [2024-07-15 13:17:24.715370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:28.068 [2024-07-15 13:17:24.715381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.068 [2024-07-15 13:17:24.715485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:28.068 [2024-07-15 13:17:24.715504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:28.068 [2024-07-15 13:17:24.715517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:28.068 [2024-07-15 13:17:24.715528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.068 [2024-07-15 13:17:24.715584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:28.068 [2024-07-15 13:17:24.715608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:28.068 [2024-07-15 13:17:24.715621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:28.068 [2024-07-15 13:17:24.715633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.068 [2024-07-15 13:17:24.715683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:28.068 [2024-07-15 13:17:24.715699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:28.068 [2024-07-15 13:17:24.715711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:28.068 [2024-07-15 13:17:24.715722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.068 
[2024-07-15 13:17:24.715801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:28.068 [2024-07-15 13:17:24.715820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:28.068 [2024-07-15 13:17:24.715833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:28.068 [2024-07-15 13:17:24.715856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.068 [2024-07-15 13:17:24.716036] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 68.263 ms, result 0 00:18:28.326 00:18:28.326 00:18:28.326 13:17:25 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=89988 00:18:28.326 13:17:25 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:28.326 13:17:25 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 89988 00:18:28.326 13:17:25 ftl.ftl_trim -- common/autotest_common.sh@827 -- # '[' -z 89988 ']' 00:18:28.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.326 13:17:25 ftl.ftl_trim -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.326 13:17:25 ftl.ftl_trim -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:28.326 13:17:25 ftl.ftl_trim -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.327 13:17:25 ftl.ftl_trim -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:28.327 13:17:25 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:28.585 [2024-07-15 13:17:25.116829] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:28.585 [2024-07-15 13:17:25.117415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89988 ] 00:18:28.585 [2024-07-15 13:17:25.260497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.842 [2024-07-15 13:17:25.358897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.408 13:17:26 ftl.ftl_trim -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:29.408 13:17:26 ftl.ftl_trim -- common/autotest_common.sh@860 -- # return 0 00:18:29.408 13:17:26 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:29.666 [2024-07-15 13:17:26.254921] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:29.666 [2024-07-15 13:17:26.255021] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:29.927 [2024-07-15 13:17:26.424023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.927 [2024-07-15 13:17:26.424087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:29.927 [2024-07-15 13:17:26.424113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:29.927 [2024-07-15 13:17:26.424127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.927 [2024-07-15 13:17:26.426965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.927 [2024-07-15 13:17:26.427010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:29.927 [2024-07-15 13:17:26.427044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 2.790 ms 00:18:29.927 [2024-07-15 13:17:26.427056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.927 [2024-07-15 13:17:26.427203] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:29.927 [2024-07-15 13:17:26.427522] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:29.927 [2024-07-15 13:17:26.427573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.927 [2024-07-15 13:17:26.427589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:29.927 [2024-07-15 13:17:26.427606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:18:29.927 [2024-07-15 13:17:26.427617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.927 [2024-07-15 13:17:26.429643] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:29.927 [2024-07-15 13:17:26.432594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.927 [2024-07-15 13:17:26.432645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:29.927 [2024-07-15 13:17:26.432673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.958 ms 00:18:29.927 [2024-07-15 13:17:26.432689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.927 [2024-07-15 13:17:26.432794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.927 [2024-07-15 13:17:26.432818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:29.927 [2024-07-15 13:17:26.432848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:29.927 [2024-07-15 13:17:26.432866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.927 [2024-07-15 13:17:26.441529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.927 [2024-07-15 13:17:26.441718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:29.927 [2024-07-15 13:17:26.441846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.577 ms 00:18:29.927 [2024-07-15 13:17:26.441913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.927 [2024-07-15 13:17:26.442255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.927 [2024-07-15 13:17:26.442413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:29.927 [2024-07-15 13:17:26.442547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:18:29.927 [2024-07-15 13:17:26.442576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.927 [2024-07-15 13:17:26.442635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.927 [2024-07-15 13:17:26.442654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:29.927 [2024-07-15 13:17:26.442668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:29.927 [2024-07-15 13:17:26.442682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.927 [2024-07-15 13:17:26.442722] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:29.927 [2024-07-15 13:17:26.444800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.927 [2024-07-15 13:17:26.444847] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:29.927 [2024-07-15 13:17:26.444870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.084 ms 00:18:29.927 [2024-07-15 13:17:26.444886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.927 [2024-07-15 13:17:26.444938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.927 [2024-07-15 13:17:26.444954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:29.927 [2024-07-15 13:17:26.444970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:29.927 [2024-07-15 13:17:26.444981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.927 [2024-07-15 13:17:26.445024] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:29.927 [2024-07-15 13:17:26.445065] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:29.927 [2024-07-15 13:17:26.445122] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:29.927 [2024-07-15 13:17:26.445185] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:29.927 [2024-07-15 13:17:26.445323] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:29.927 [2024-07-15 13:17:26.445341] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:29.927 [2024-07-15 13:17:26.445366] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:29.927 [2024-07-15 13:17:26.445382] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:29.927 [2024-07-15 13:17:26.445410] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:29.927 [2024-07-15 13:17:26.445423] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:29.927 [2024-07-15 13:17:26.445440] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:29.927 [2024-07-15 13:17:26.445451] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:29.927 [2024-07-15 13:17:26.445465] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:29.927 [2024-07-15 13:17:26.445480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.927 [2024-07-15 13:17:26.445495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:29.927 [2024-07-15 13:17:26.445507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.463 ms 00:18:29.927 [2024-07-15 13:17:26.445537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.927 [2024-07-15 13:17:26.445640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.927 [2024-07-15 13:17:26.445659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:29.927 [2024-07-15 13:17:26.445672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:29.927 [2024-07-15 13:17:26.445686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.927 [2024-07-15 13:17:26.445799] ftl_layout.c: 758:ftl_layout_dump: 
*NOTICE*: [FTL][ftl0] NV cache layout: 00:18:29.927 [2024-07-15 13:17:26.445824] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:29.927 [2024-07-15 13:17:26.445847] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:29.927 [2024-07-15 13:17:26.445862] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.927 [2024-07-15 13:17:26.445874] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:29.927 [2024-07-15 13:17:26.445890] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:29.927 [2024-07-15 13:17:26.445901] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:29.927 [2024-07-15 13:17:26.445915] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:29.927 [2024-07-15 13:17:26.445926] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:29.927 [2024-07-15 13:17:26.445939] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:29.927 [2024-07-15 13:17:26.445950] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:29.927 [2024-07-15 13:17:26.445963] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:29.927 [2024-07-15 13:17:26.445973] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:29.927 [2024-07-15 13:17:26.445986] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:29.927 [2024-07-15 13:17:26.445997] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:29.927 [2024-07-15 13:17:26.446010] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.927 [2024-07-15 13:17:26.446021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:29.927 [2024-07-15 13:17:26.446035] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:29.927 [2024-07-15 13:17:26.446047] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.927 [2024-07-15 13:17:26.446060] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:29.927 [2024-07-15 13:17:26.446071] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:29.927 [2024-07-15 13:17:26.446087] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:29.927 [2024-07-15 13:17:26.446097] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:29.927 [2024-07-15 13:17:26.446124] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:29.927 [2024-07-15 13:17:26.446140] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:29.927 [2024-07-15 13:17:26.446178] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:29.927 [2024-07-15 13:17:26.446192] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:29.927 [2024-07-15 13:17:26.446205] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:29.927 [2024-07-15 13:17:26.446222] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:29.927 [2024-07-15 13:17:26.446237] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:29.927 [2024-07-15 13:17:26.446247] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:29.927 [2024-07-15 13:17:26.446260] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:29.927 [2024-07-15 13:17:26.446271] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:29.927 [2024-07-15 13:17:26.446285] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:29.927 [2024-07-15 13:17:26.446295] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:29.927 [2024-07-15 13:17:26.446308] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:29.927 [2024-07-15 13:17:26.446319] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:29.927 [2024-07-15 13:17:26.446334] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:29.927 [2024-07-15 13:17:26.446345] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:29.927 [2024-07-15 13:17:26.446358] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.927 [2024-07-15 13:17:26.446369] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:29.927 [2024-07-15 13:17:26.446381] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:29.927 [2024-07-15 13:17:26.446392] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.927 [2024-07-15 13:17:26.446404] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:29.927 [2024-07-15 13:17:26.446416] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:29.927 [2024-07-15 13:17:26.446430] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:29.927 [2024-07-15 13:17:26.446441] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.927 [2024-07-15 13:17:26.446455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:29.927 [2024-07-15 13:17:26.446466] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:29.927 [2024-07-15 13:17:26.446480] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:29.927 [2024-07-15 13:17:26.446492] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:29.927 [2024-07-15 13:17:26.446506] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:29.927 [2024-07-15 13:17:26.446517] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:29.927 [2024-07-15 13:17:26.446536] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:29.927 [2024-07-15 13:17:26.446551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:29.927 [2024-07-15 13:17:26.446567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:29.927 [2024-07-15 13:17:26.446579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:29.927 [2024-07-15 13:17:26.446593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:29.927 [2024-07-15 13:17:26.446605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:29.927 [2024-07-15 13:17:26.446619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:29.927 [2024-07-15 13:17:26.446631] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:29.927 [2024-07-15 13:17:26.446645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:29.927 [2024-07-15 13:17:26.446657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:29.927 [2024-07-15 13:17:26.446671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:29.927 [2024-07-15 13:17:26.446683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:29.927 [2024-07-15 13:17:26.446696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:29.927 [2024-07-15 13:17:26.446708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:29.927 [2024-07-15 13:17:26.446722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:29.927 [2024-07-15 13:17:26.446734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:29.927 [2024-07-15 13:17:26.446750] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:29.927 [2024-07-15 13:17:26.446763] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:29.927 [2024-07-15 13:17:26.446782] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:29.927 [2024-07-15 13:17:26.446794] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:29.927 [2024-07-15 13:17:26.446808] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:29.927 [2024-07-15 13:17:26.446820] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:29.927 [2024-07-15 13:17:26.446836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.927 [2024-07-15 13:17:26.446849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:29.927 [2024-07-15 13:17:26.446865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.099 ms 00:18:29.927 [2024-07-15 13:17:26.446877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.927 [2024-07-15 13:17:26.462306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.927 [2024-07-15 13:17:26.462376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:29.927 [2024-07-15 13:17:26.462417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.309 ms 00:18:29.927 [2024-07-15 13:17:26.462431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.927 [2024-07-15 13:17:26.462644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:29.927 [2024-07-15 13:17:26.462667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:29.927 [2024-07-15 13:17:26.462688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:18:29.927 [2024-07-15 13:17:26.462700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.927 [2024-07-15 13:17:26.476449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.927 [2024-07-15 13:17:26.476516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:29.927 [2024-07-15 13:17:26.476541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.700 ms 00:18:29.927 [2024-07-15 13:17:26.476555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.927 [2024-07-15 13:17:26.476693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.927 [2024-07-15 13:17:26.476712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:29.927 [2024-07-15 13:17:26.476729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:29.927 [2024-07-15 13:17:26.476741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.927 [2024-07-15 13:17:26.477313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.927 [2024-07-15 13:17:26.477342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:29.927 [2024-07-15 13:17:26.477360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:18:29.927 [2024-07-15 13:17:26.477373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.928 [2024-07-15 13:17:26.477552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.928 [2024-07-15 13:17:26.477574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:29.928 [2024-07-15 13:17:26.477592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:18:29.928 [2024-07-15 13:17:26.477604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.928 [2024-07-15 13:17:26.486921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.928 [2024-07-15 13:17:26.486982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:29.928 [2024-07-15 13:17:26.487006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.279 ms 00:18:29.928 [2024-07-15 13:17:26.487019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.928 [2024-07-15 13:17:26.490194] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:29.928 [2024-07-15 13:17:26.490238] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:29.928 [2024-07-15 13:17:26.490262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.928 [2024-07-15 13:17:26.490276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:29.928 [2024-07-15 13:17:26.490296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.982 ms 00:18:29.928 [2024-07-15 13:17:26.490308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.928 [2024-07-15 13:17:26.506120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.928 [2024-07-15 13:17:26.506199] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:29.928 [2024-07-15 13:17:26.506224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.738 ms 00:18:29.928 [2024-07-15 13:17:26.506237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.928 [2024-07-15 13:17:26.508992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.928 [2024-07-15 13:17:26.509036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:29.928 [2024-07-15 13:17:26.509056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.594 ms 00:18:29.928 [2024-07-15 13:17:26.509068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.928 [2024-07-15 13:17:26.510700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.928 [2024-07-15 13:17:26.510739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:29.928 [2024-07-15 13:17:26.510759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.574 ms 00:18:29.928 [2024-07-15 13:17:26.510771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.928 [2024-07-15 13:17:26.511295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.928 [2024-07-15 13:17:26.511325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:29.928 [2024-07-15 13:17:26.511344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:18:29.928 [2024-07-15 13:17:26.511356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.928 [2024-07-15 13:17:26.545924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.928 [2024-07-15 13:17:26.546006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:29.928 [2024-07-15 13:17:26.546035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.525 ms 00:18:29.928 [2024-07-15 13:17:26.546048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.928 [2024-07-15 13:17:26.554644] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:29.928 [2024-07-15 13:17:26.575581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.928 [2024-07-15 13:17:26.575672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:29.928 [2024-07-15 13:17:26.575696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.300 ms 00:18:29.928 [2024-07-15 13:17:26.575712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.928 [2024-07-15 13:17:26.575862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.928 [2024-07-15 13:17:26.575885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:29.928 [2024-07-15 13:17:26.575904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:29.928 [2024-07-15 13:17:26.575918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.928 [2024-07-15 13:17:26.575999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.928 [2024-07-15 13:17:26.576038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:29.928 [2024-07-15 13:17:26.576053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:29.928 [2024-07-15 13:17:26.576079] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.928 [2024-07-15 13:17:26.576115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.928 [2024-07-15 13:17:26.576132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:29.928 [2024-07-15 13:17:26.576178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:29.928 [2024-07-15 13:17:26.576204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.928 [2024-07-15 13:17:26.576250] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:29.928 [2024-07-15 13:17:26.576272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.928 [2024-07-15 13:17:26.576285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:29.928 [2024-07-15 13:17:26.576299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:29.928 [2024-07-15 13:17:26.576311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.928 [2024-07-15 13:17:26.580632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.928 [2024-07-15 13:17:26.580685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:29.928 [2024-07-15 13:17:26.580708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.271 ms 00:18:29.928 [2024-07-15 13:17:26.580729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.928 [2024-07-15 13:17:26.580833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.928 [2024-07-15 13:17:26.580852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:29.928 [2024-07-15 13:17:26.580868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:29.928 [2024-07-15 13:17:26.580889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.928 [2024-07-15 13:17:26.582199] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:29.928 [2024-07-15 13:17:26.583410] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 157.820 ms, result 0 00:18:29.928 [2024-07-15 13:17:26.584402] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:29.928 Some configs were skipped because the RPC state that can call them passed over. 
00:18:29.928 13:17:26 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:30.184 [2024-07-15 13:17:26.873977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.184 [2024-07-15 13:17:26.874274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:30.184 [2024-07-15 13:17:26.874411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.538 ms 00:18:30.184 [2024-07-15 13:17:26.874474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.184 [2024-07-15 13:17:26.874646] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.213 ms, result 0 00:18:30.184 true 00:18:30.184 13:17:26 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:30.442 [2024-07-15 13:17:27.153806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.442 [2024-07-15 13:17:27.154068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:30.442 [2024-07-15 13:17:27.154106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.054 ms 00:18:30.442 [2024-07-15 13:17:27.154135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.442 [2024-07-15 13:17:27.154226] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.488 ms, result 0 00:18:30.442 true 00:18:30.442 13:17:27 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 89988 00:18:30.442 13:17:27 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 89988 ']' 00:18:30.442 13:17:27 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 89988 00:18:30.442 13:17:27 ftl.ftl_trim -- common/autotest_common.sh@951 -- # uname 00:18:30.700 13:17:27 ftl.ftl_trim -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:30.700 13:17:27 ftl.ftl_trim -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89988 00:18:30.700 killing process with pid 89988 00:18:30.701 13:17:27 ftl.ftl_trim -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:30.701 13:17:27 ftl.ftl_trim -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:30.701 13:17:27 ftl.ftl_trim -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89988' 00:18:30.701 13:17:27 ftl.ftl_trim -- common/autotest_common.sh@965 -- # kill 89988 00:18:30.701 13:17:27 ftl.ftl_trim -- common/autotest_common.sh@970 -- # wait 89988 00:18:30.701 [2024-07-15 13:17:27.376661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.701 [2024-07-15 13:17:27.376753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:30.701 [2024-07-15 13:17:27.376777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:30.701 [2024-07-15 13:17:27.376791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.701 [2024-07-15 13:17:27.376855] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:30.701 [2024-07-15 13:17:27.377685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.701 [2024-07-15 13:17:27.377713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:30.701 [2024-07-15 13:17:27.377730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.800 ms 00:18:30.701 [2024-07-15 13:17:27.377745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.701 [2024-07-15 13:17:27.378078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.701 [2024-07-15 13:17:27.378105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:30.701 [2024-07-15 13:17:27.378135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:18:30.701 [2024-07-15 13:17:27.378160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.701 [2024-07-15 13:17:27.382303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.701 [2024-07-15 13:17:27.382355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:30.701 [2024-07-15 13:17:27.382376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.106 ms 00:18:30.701 [2024-07-15 13:17:27.382389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.701 [2024-07-15 13:17:27.389662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.701 [2024-07-15 13:17:27.389722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:30.701 [2024-07-15 13:17:27.389743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.209 ms 00:18:30.701 [2024-07-15 13:17:27.389756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.701 [2024-07-15 13:17:27.391685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.701 [2024-07-15 13:17:27.391730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:30.701 [2024-07-15 13:17:27.391751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.790 ms 00:18:30.701 [2024-07-15 13:17:27.391762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.701 [2024-07-15 13:17:27.395327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.701 [2024-07-15 13:17:27.395377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:30.701 [2024-07-15 13:17:27.395398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.512 ms 00:18:30.701 [2024-07-15 13:17:27.395427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.701 [2024-07-15 13:17:27.395591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.701 [2024-07-15 13:17:27.395611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:30.701 [2024-07-15 13:17:27.395638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:18:30.701 [2024-07-15 13:17:27.395650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.701 [2024-07-15 13:17:27.397586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.701 [2024-07-15 13:17:27.397626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:30.701 [2024-07-15 13:17:27.397645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.902 ms 00:18:30.701 [2024-07-15 13:17:27.397657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.701 [2024-07-15 13:17:27.399178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.701 [2024-07-15 13:17:27.399214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:30.701 [2024-07-15 
13:17:27.399236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.469 ms 00:18:30.701 [2024-07-15 13:17:27.399248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.701 [2024-07-15 13:17:27.400423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.701 [2024-07-15 13:17:27.400462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:30.701 [2024-07-15 13:17:27.400481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.121 ms 00:18:30.701 [2024-07-15 13:17:27.400493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.701 [2024-07-15 13:17:27.401574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.701 [2024-07-15 13:17:27.401612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:30.701 [2024-07-15 13:17:27.401632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:18:30.701 [2024-07-15 13:17:27.401643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.701 [2024-07-15 13:17:27.401690] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:30.701 [2024-07-15 13:17:27.401723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.401750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.401763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.401781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.401794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.401808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.401820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.401834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.401847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.401861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.401873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.401906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.401922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.401937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.401949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.401964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.401976] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.401991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 
13:17:27.402408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:18:30.701 [2024-07-15 13:17:27.402753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.402988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:30.701 [2024-07-15 13:17:27.403000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:30.702 [2024-07-15 13:17:27.403015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:30.702 [2024-07-15 13:17:27.403027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:30.702 [2024-07-15 13:17:27.403042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:30.702 [2024-07-15 13:17:27.403054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:30.702 [2024-07-15 13:17:27.403070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:30.702 [2024-07-15 13:17:27.403083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:18:30.702 [2024-07-15 13:17:27.403098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:30.702 [2024-07-15 13:17:27.403110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:30.702 [2024-07-15 13:17:27.403124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:30.702 [2024-07-15 13:17:27.403136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:30.702 [2024-07-15 13:17:27.403162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:30.702 [2024-07-15 13:17:27.403176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:30.702 [2024-07-15 13:17:27.403191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:30.702 [2024-07-15 13:17:27.403203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:30.702 [2024-07-15 13:17:27.403220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:30.702 [2024-07-15 13:17:27.403242] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:30.702 [2024-07-15 13:17:27.403257] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1f2b6391-d3a9-46a0-a156-2b8935f16a29 00:18:30.702 [2024-07-15 13:17:27.403270] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:30.702 [2024-07-15 13:17:27.403289] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:30.702 [2024-07-15 13:17:27.403300] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:30.702 [2024-07-15 13:17:27.403314] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:30.702 [2024-07-15 13:17:27.403325] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:30.702 [2024-07-15 13:17:27.403339] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:30.702 [2024-07-15 13:17:27.403351] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:30.702 [2024-07-15 13:17:27.403364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:30.702 [2024-07-15 13:17:27.403374] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:30.702 [2024-07-15 13:17:27.403388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.702 [2024-07-15 13:17:27.403400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:30.702 [2024-07-15 13:17:27.403415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.701 ms 00:18:30.702 [2024-07-15 13:17:27.403427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.702 [2024-07-15 13:17:27.405590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.702 [2024-07-15 13:17:27.405621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:30.702 [2024-07-15 13:17:27.405643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.102 ms 00:18:30.702 [2024-07-15 13:17:27.405669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.702 [2024-07-15 13:17:27.405810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:30.702 [2024-07-15 13:17:27.405826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:30.702 [2024-07-15 13:17:27.405841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:18:30.702 [2024-07-15 13:17:27.405854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.702 [2024-07-15 13:17:27.414098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.702 [2024-07-15 13:17:27.414195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:30.702 [2024-07-15 13:17:27.414220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.702 [2024-07-15 13:17:27.414237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.702 [2024-07-15 13:17:27.414399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.702 [2024-07-15 13:17:27.414417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:30.702 [2024-07-15 13:17:27.414433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.702 [2024-07-15 13:17:27.414445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.702 [2024-07-15 13:17:27.414523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.702 [2024-07-15 13:17:27.414541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:30.702 [2024-07-15 13:17:27.414567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.702 [2024-07-15 13:17:27.414579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.702 [2024-07-15 13:17:27.414618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.702 [2024-07-15 13:17:27.414635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:30.702 [2024-07-15 13:17:27.414650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.702 [2024-07-15 13:17:27.414662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.702 [2024-07-15 13:17:27.432387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.702 [2024-07-15 13:17:27.432468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:30.702 [2024-07-15 13:17:27.432494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.702 [2024-07-15 13:17:27.432507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.960 [2024-07-15 13:17:27.443667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.960 [2024-07-15 13:17:27.443740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:30.960 [2024-07-15 13:17:27.443765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.960 [2024-07-15 13:17:27.443779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.960 [2024-07-15 13:17:27.443903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.960 [2024-07-15 13:17:27.443925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:30.960 [2024-07-15 13:17:27.443941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.960 [2024-07-15 13:17:27.443953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:30.960 [2024-07-15 13:17:27.444003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.960 [2024-07-15 13:17:27.444018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:30.960 [2024-07-15 13:17:27.444033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.960 [2024-07-15 13:17:27.444057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.960 [2024-07-15 13:17:27.444186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.960 [2024-07-15 13:17:27.444206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:30.960 [2024-07-15 13:17:27.444248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.960 [2024-07-15 13:17:27.444260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.960 [2024-07-15 13:17:27.444320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.960 [2024-07-15 13:17:27.444338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:30.960 [2024-07-15 13:17:27.444353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.960 [2024-07-15 13:17:27.444365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.960 [2024-07-15 13:17:27.444425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.960 [2024-07-15 13:17:27.444441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:30.960 [2024-07-15 13:17:27.444459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.960 [2024-07-15 13:17:27.444471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.960 [2024-07-15 13:17:27.444535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.960 [2024-07-15 13:17:27.444551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:30.960 [2024-07-15 13:17:27.444566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.960 [2024-07-15 13:17:27.444578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.960 [2024-07-15 13:17:27.444764] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 68.064 ms, result 0 00:18:31.218 13:17:27 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:31.218 [2024-07-15 13:17:27.805816] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:18:31.218 [2024-07-15 13:17:27.806001] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90030 ] 00:18:31.218 [2024-07-15 13:17:27.947207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.476 [2024-07-15 13:17:28.045088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.476 [2024-07-15 13:17:28.170237] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:31.476 [2024-07-15 13:17:28.170334] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:31.735 [2024-07-15 13:17:28.323931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.735 [2024-07-15 13:17:28.324012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:31.735 [2024-07-15 13:17:28.324034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:31.735 [2024-07-15 13:17:28.324056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.735 [2024-07-15 13:17:28.326904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.735 [2024-07-15 13:17:28.326951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:31.735 [2024-07-15 13:17:28.326978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.818 ms 00:18:31.735 [2024-07-15 13:17:28.326990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.735 [2024-07-15 13:17:28.327109] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:31.735 [2024-07-15 13:17:28.327439] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:31.735 [2024-07-15 13:17:28.327465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.735 [2024-07-15 13:17:28.327478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:31.735 [2024-07-15 13:17:28.327495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:18:31.735 [2024-07-15 13:17:28.327507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.735 [2024-07-15 13:17:28.329443] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:31.735 [2024-07-15 13:17:28.332326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.735 [2024-07-15 13:17:28.332383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:31.735 [2024-07-15 13:17:28.332402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.884 ms 00:18:31.735 [2024-07-15 13:17:28.332414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.735 [2024-07-15 13:17:28.332524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.735 [2024-07-15 13:17:28.332553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:31.735 [2024-07-15 13:17:28.332567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:31.735 [2024-07-15 13:17:28.332583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.735 [2024-07-15 13:17:28.341254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.735 [2024-07-15 
13:17:28.341330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:31.735 [2024-07-15 13:17:28.341350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.596 ms 00:18:31.735 [2024-07-15 13:17:28.341362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.735 [2024-07-15 13:17:28.341604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.735 [2024-07-15 13:17:28.341627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:31.735 [2024-07-15 13:17:28.341641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:18:31.735 [2024-07-15 13:17:28.341665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.735 [2024-07-15 13:17:28.341719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.735 [2024-07-15 13:17:28.341735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:31.735 [2024-07-15 13:17:28.341757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:31.735 [2024-07-15 13:17:28.341768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.735 [2024-07-15 13:17:28.341817] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:31.735 [2024-07-15 13:17:28.343965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.735 [2024-07-15 13:17:28.344006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:31.735 [2024-07-15 13:17:28.344028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.159 ms 00:18:31.735 [2024-07-15 13:17:28.344040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.735 [2024-07-15 13:17:28.344095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.735 [2024-07-15 13:17:28.344120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:31.735 [2024-07-15 13:17:28.344133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:31.735 [2024-07-15 13:17:28.344163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.735 [2024-07-15 13:17:28.344199] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:31.735 [2024-07-15 13:17:28.344230] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:31.735 [2024-07-15 13:17:28.344304] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:31.735 [2024-07-15 13:17:28.344339] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:31.735 [2024-07-15 13:17:28.344479] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:31.735 [2024-07-15 13:17:28.344503] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:31.735 [2024-07-15 13:17:28.344519] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:31.735 [2024-07-15 13:17:28.344534] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:31.735 [2024-07-15 13:17:28.344547] ftl_layout.c: 677:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:31.735 [2024-07-15 13:17:28.344560] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:31.735 [2024-07-15 13:17:28.344571] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:31.735 [2024-07-15 13:17:28.344582] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:31.735 [2024-07-15 13:17:28.344607] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:31.735 [2024-07-15 13:17:28.344619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.735 [2024-07-15 13:17:28.344631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:31.735 [2024-07-15 13:17:28.344651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:18:31.735 [2024-07-15 13:17:28.344666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.735 [2024-07-15 13:17:28.344813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.735 [2024-07-15 13:17:28.344836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:31.735 [2024-07-15 13:17:28.344852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:18:31.735 [2024-07-15 13:17:28.344864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.735 [2024-07-15 13:17:28.344977] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:31.735 [2024-07-15 13:17:28.344996] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:31.735 [2024-07-15 13:17:28.345008] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:31.735 [2024-07-15 13:17:28.345019] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.735 [2024-07-15 13:17:28.345043] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:31.735 [2024-07-15 13:17:28.345054] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:31.735 [2024-07-15 13:17:28.345064] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:31.735 [2024-07-15 13:17:28.345075] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:31.735 [2024-07-15 13:17:28.345085] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:31.735 [2024-07-15 13:17:28.345095] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:31.735 [2024-07-15 13:17:28.345105] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:31.735 [2024-07-15 13:17:28.345116] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:31.735 [2024-07-15 13:17:28.345129] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:31.735 [2024-07-15 13:17:28.345141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:31.735 [2024-07-15 13:17:28.345167] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:31.735 [2024-07-15 13:17:28.345178] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.736 [2024-07-15 13:17:28.345188] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:31.736 [2024-07-15 13:17:28.345199] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:31.736 [2024-07-15 13:17:28.345209] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:18:31.736 [2024-07-15 13:17:28.345221] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:31.736 [2024-07-15 13:17:28.345231] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:31.736 [2024-07-15 13:17:28.345242] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:31.736 [2024-07-15 13:17:28.345253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:31.736 [2024-07-15 13:17:28.345264] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:31.736 [2024-07-15 13:17:28.345277] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:31.736 [2024-07-15 13:17:28.345287] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:31.736 [2024-07-15 13:17:28.345298] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:31.736 [2024-07-15 13:17:28.345309] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:31.736 [2024-07-15 13:17:28.345325] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:31.736 [2024-07-15 13:17:28.345337] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:31.736 [2024-07-15 13:17:28.345347] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:31.736 [2024-07-15 13:17:28.345358] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:31.736 [2024-07-15 13:17:28.345368] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:31.736 [2024-07-15 13:17:28.345379] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:31.736 [2024-07-15 13:17:28.345389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:31.736 [2024-07-15 13:17:28.345399] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:31.736 [2024-07-15 13:17:28.345409] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:31.736 [2024-07-15 13:17:28.345419] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:31.736 [2024-07-15 13:17:28.345429] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:31.736 [2024-07-15 13:17:28.345439] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.736 [2024-07-15 13:17:28.345449] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:31.736 [2024-07-15 13:17:28.345459] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:31.736 [2024-07-15 13:17:28.345469] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.736 [2024-07-15 13:17:28.345480] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:31.736 [2024-07-15 13:17:28.345494] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:31.736 [2024-07-15 13:17:28.345507] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:31.736 [2024-07-15 13:17:28.345527] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.736 [2024-07-15 13:17:28.345538] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:31.736 [2024-07-15 13:17:28.345549] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:31.736 [2024-07-15 13:17:28.345560] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:31.736 [2024-07-15 13:17:28.345570] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:31.736 [2024-07-15 13:17:28.345580] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:31.736 [2024-07-15 13:17:28.345590] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:31.736 [2024-07-15 13:17:28.345604] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:31.736 [2024-07-15 13:17:28.345619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:31.736 [2024-07-15 13:17:28.345632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:31.736 [2024-07-15 13:17:28.345644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:31.736 [2024-07-15 13:17:28.345656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:31.736 [2024-07-15 13:17:28.345668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:31.736 [2024-07-15 13:17:28.345679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:31.736 [2024-07-15 13:17:28.345694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:31.736 [2024-07-15 13:17:28.345706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:31.736 [2024-07-15 13:17:28.345717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:31.736 [2024-07-15 13:17:28.345729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:31.736 [2024-07-15 13:17:28.345740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:31.736 [2024-07-15 13:17:28.345752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:31.736 [2024-07-15 13:17:28.345763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:31.736 [2024-07-15 13:17:28.345775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:31.736 [2024-07-15 13:17:28.345787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:31.736 [2024-07-15 13:17:28.345798] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:31.736 [2024-07-15 13:17:28.345822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:31.736 [2024-07-15 13:17:28.345844] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:18:31.736 [2024-07-15 13:17:28.345856] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:31.736 [2024-07-15 13:17:28.345868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:31.736 [2024-07-15 13:17:28.345880] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:31.736 [2024-07-15 13:17:28.345892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.736 [2024-07-15 13:17:28.345908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:31.736 [2024-07-15 13:17:28.345921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:18:31.736 [2024-07-15 13:17:28.345932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.736 [2024-07-15 13:17:28.370216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.736 [2024-07-15 13:17:28.370322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:31.736 [2024-07-15 13:17:28.370361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.166 ms 00:18:31.736 [2024-07-15 13:17:28.370397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.736 [2024-07-15 13:17:28.370770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.736 [2024-07-15 13:17:28.370809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:31.736 [2024-07-15 13:17:28.370836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:18:31.736 [2024-07-15 13:17:28.370858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.736 [2024-07-15 13:17:28.385709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.736 [2024-07-15 13:17:28.385786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:31.736 [2024-07-15 13:17:28.385811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.786 ms 00:18:31.736 [2024-07-15 13:17:28.385834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.736 [2024-07-15 13:17:28.386004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.736 [2024-07-15 13:17:28.386029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:31.736 [2024-07-15 13:17:28.386046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:31.736 [2024-07-15 13:17:28.386059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.736 [2024-07-15 13:17:28.386766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.736 [2024-07-15 13:17:28.386837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:31.736 [2024-07-15 13:17:28.386870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:18:31.736 [2024-07-15 13:17:28.386896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.736 [2024-07-15 13:17:28.387133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.736 [2024-07-15 13:17:28.387188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:31.736 [2024-07-15 13:17:28.387206] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:18:31.736 [2024-07-15 13:17:28.387219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.736 [2024-07-15 13:17:28.396221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.736 [2024-07-15 13:17:28.396298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:31.736 [2024-07-15 13:17:28.396322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.958 ms 00:18:31.736 [2024-07-15 13:17:28.396338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.736 [2024-07-15 13:17:28.399901] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:31.736 [2024-07-15 13:17:28.399961] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:31.736 [2024-07-15 13:17:28.399991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.736 [2024-07-15 13:17:28.400007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:31.736 [2024-07-15 13:17:28.400026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.408 ms 00:18:31.736 [2024-07-15 13:17:28.400040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.736 [2024-07-15 13:17:28.419856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.736 [2024-07-15 13:17:28.419974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:31.736 [2024-07-15 13:17:28.420003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.741 ms 00:18:31.736 [2024-07-15 13:17:28.420032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.736 [2024-07-15 13:17:28.423617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.736 [2024-07-15 13:17:28.423675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:31.736 [2024-07-15 13:17:28.423697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.349 ms 00:18:31.736 [2024-07-15 13:17:28.423712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.736 [2024-07-15 13:17:28.425630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.736 [2024-07-15 13:17:28.425679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:31.736 [2024-07-15 13:17:28.425699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.850 ms 00:18:31.736 [2024-07-15 13:17:28.425714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.737 [2024-07-15 13:17:28.426373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.737 [2024-07-15 13:17:28.426453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:31.737 [2024-07-15 13:17:28.426490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:18:31.737 [2024-07-15 13:17:28.426514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.737 [2024-07-15 13:17:28.451692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.737 [2024-07-15 13:17:28.451797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:31.737 [2024-07-15 13:17:28.451822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.115 ms 
00:18:31.737 [2024-07-15 13:17:28.451837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.737 [2024-07-15 13:17:28.462763] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:31.995 [2024-07-15 13:17:28.488083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.995 [2024-07-15 13:17:28.488207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:31.995 [2024-07-15 13:17:28.488236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.049 ms 00:18:31.995 [2024-07-15 13:17:28.488251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.995 [2024-07-15 13:17:28.488427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.995 [2024-07-15 13:17:28.488451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:31.995 [2024-07-15 13:17:28.488488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:31.995 [2024-07-15 13:17:28.488502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.995 [2024-07-15 13:17:28.488638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.995 [2024-07-15 13:17:28.488672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:31.995 [2024-07-15 13:17:28.488698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:18:31.995 [2024-07-15 13:17:28.488720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.995 [2024-07-15 13:17:28.488786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.995 [2024-07-15 13:17:28.488826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:31.995 [2024-07-15 13:17:28.488875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:31.995 [2024-07-15 13:17:28.488908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.995 [2024-07-15 13:17:28.488993] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:31.995 [2024-07-15 13:17:28.489033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.995 [2024-07-15 13:17:28.489062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:31.995 [2024-07-15 13:17:28.489091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:18:31.995 [2024-07-15 13:17:28.489116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.995 [2024-07-15 13:17:28.494107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.995 [2024-07-15 13:17:28.494220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:31.995 [2024-07-15 13:17:28.494243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.885 ms 00:18:31.995 [2024-07-15 13:17:28.494267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.995 [2024-07-15 13:17:28.494381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.995 [2024-07-15 13:17:28.494404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:31.995 [2024-07-15 13:17:28.494420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:18:31.995 [2024-07-15 13:17:28.494434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.995 [2024-07-15 
13:17:28.495836] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:31.995 [2024-07-15 13:17:28.497652] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 171.506 ms, result 0 00:18:31.996 [2024-07-15 13:17:28.498724] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:31.996 [2024-07-15 13:17:28.506548] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:43.091  Copying: 27/256 [MB] (27 MBps) Copying: 51/256 [MB] (24 MBps) Copying: 75/256 [MB] (23 MBps) Copying: 99/256 [MB] (24 MBps) Copying: 123/256 [MB] (23 MBps) Copying: 146/256 [MB] (23 MBps) Copying: 169/256 [MB] (23 MBps) Copying: 193/256 [MB] (23 MBps) Copying: 217/256 [MB] (23 MBps) Copying: 240/256 [MB] (23 MBps) Copying: 256/256 [MB] (average 24 MBps)[2024-07-15 13:17:39.584297] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:43.091 [2024-07-15 13:17:39.586073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.091 [2024-07-15 13:17:39.586133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:43.091 [2024-07-15 13:17:39.586170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:43.091 [2024-07-15 13:17:39.586185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.091 [2024-07-15 13:17:39.586220] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:43.091 [2024-07-15 13:17:39.587810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.091 [2024-07-15 13:17:39.587855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:43.091 [2024-07-15 13:17:39.587892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.567 ms 00:18:43.091 [2024-07-15 13:17:39.587908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.091 [2024-07-15 13:17:39.588459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.091 [2024-07-15 13:17:39.588656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:43.091 [2024-07-15 13:17:39.588836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.508 ms 00:18:43.091 [2024-07-15 13:17:39.588866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.091 [2024-07-15 13:17:39.592844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.091 [2024-07-15 13:17:39.592989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:43.091 [2024-07-15 13:17:39.593113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.937 ms 00:18:43.091 [2024-07-15 13:17:39.593184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.091 [2024-07-15 13:17:39.600635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.091 [2024-07-15 13:17:39.600885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:43.091 [2024-07-15 13:17:39.601008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.256 ms 00:18:43.091 [2024-07-15 13:17:39.601074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.091 [2024-07-15 13:17:39.603194] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.091 [2024-07-15 13:17:39.603361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:43.091 [2024-07-15 13:17:39.603498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.910 ms 00:18:43.091 [2024-07-15 13:17:39.603550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.091 [2024-07-15 13:17:39.607482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.091 [2024-07-15 13:17:39.607704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:43.091 [2024-07-15 13:17:39.607834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.866 ms 00:18:43.091 [2024-07-15 13:17:39.607887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.091 [2024-07-15 13:17:39.608092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.091 [2024-07-15 13:17:39.608166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:43.091 [2024-07-15 13:17:39.608295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:18:43.091 [2024-07-15 13:17:39.608360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.091 [2024-07-15 13:17:39.610455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.091 [2024-07-15 13:17:39.610610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:43.091 [2024-07-15 13:17:39.610723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.026 ms 00:18:43.091 [2024-07-15 13:17:39.610773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.091 [2024-07-15 13:17:39.612223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.091 [2024-07-15 13:17:39.612375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:43.091 [2024-07-15 13:17:39.612489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.389 ms 00:18:43.091 [2024-07-15 13:17:39.612606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.091 [2024-07-15 13:17:39.613858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.091 [2024-07-15 13:17:39.614010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:43.091 [2024-07-15 13:17:39.614136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.182 ms 00:18:43.091 [2024-07-15 13:17:39.614287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.091 [2024-07-15 13:17:39.616326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.091 [2024-07-15 13:17:39.616741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:43.091 [2024-07-15 13:17:39.616876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.925 ms 00:18:43.091 [2024-07-15 13:17:39.616937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.091 [2024-07-15 13:17:39.617041] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:43.091 [2024-07-15 13:17:39.617195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-07-15 13:17:39.617595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 
[2024-07-15 13:17:39.617844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-07-15 13:17:39.618080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-07-15 13:17:39.618113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-07-15 13:17:39.618170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-07-15 13:17:39.618192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-07-15 13:17:39.618209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-07-15 13:17:39.618227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-07-15 13:17:39.618245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-07-15 13:17:39.618262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-07-15 13:17:39.618279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-07-15 13:17:39.618296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-07-15 13:17:39.618313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-07-15 13:17:39.618330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-07-15 13:17:39.618348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-07-15 13:17:39.618365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-07-15 13:17:39.618381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-07-15 13:17:39.618398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-07-15 13:17:39.618415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-07-15 13:17:39.618433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-07-15 13:17:39.618449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:18:43.092 [2024-07-15 13:17:39.618560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.618992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.619009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.619461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.619627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.619707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.619838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.619989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.620990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.621007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.621025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.621042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.621059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.621075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.621092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.621110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.621127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:43.092 [2024-07-15 13:17:39.621169] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:43.092 [2024-07-15 13:17:39.621190] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
1f2b6391-d3a9-46a0-a156-2b8935f16a29 00:18:43.092 [2024-07-15 13:17:39.621208] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:43.092 [2024-07-15 13:17:39.621223] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:43.092 [2024-07-15 13:17:39.621239] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:43.092 [2024-07-15 13:17:39.621255] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:43.092 [2024-07-15 13:17:39.621271] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:43.092 [2024-07-15 13:17:39.621297] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:43.092 [2024-07-15 13:17:39.621313] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:43.092 [2024-07-15 13:17:39.621327] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:43.092 [2024-07-15 13:17:39.621342] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:43.092 [2024-07-15 13:17:39.621360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.092 [2024-07-15 13:17:39.621377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:43.092 [2024-07-15 13:17:39.621395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.321 ms 00:18:43.092 [2024-07-15 13:17:39.621421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.092 [2024-07-15 13:17:39.623780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.092 [2024-07-15 13:17:39.623824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:43.092 [2024-07-15 13:17:39.623844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.274 ms 00:18:43.092 [2024-07-15 13:17:39.623869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.092 [2024-07-15 13:17:39.624024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.092 [2024-07-15 13:17:39.624045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:43.092 [2024-07-15 13:17:39.624063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:18:43.092 [2024-07-15 13:17:39.624084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.092 [2024-07-15 13:17:39.631997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.093 [2024-07-15 13:17:39.632091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:43.093 [2024-07-15 13:17:39.632125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.093 [2024-07-15 13:17:39.632138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.093 [2024-07-15 13:17:39.632291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.093 [2024-07-15 13:17:39.632311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:43.093 [2024-07-15 13:17:39.632335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.093 [2024-07-15 13:17:39.632347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.093 [2024-07-15 13:17:39.632427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.093 [2024-07-15 13:17:39.632445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:43.093 
[2024-07-15 13:17:39.632458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.093 [2024-07-15 13:17:39.632470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.093 [2024-07-15 13:17:39.632503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.093 [2024-07-15 13:17:39.632517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:43.093 [2024-07-15 13:17:39.632529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.093 [2024-07-15 13:17:39.632540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.093 [2024-07-15 13:17:39.651068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.093 [2024-07-15 13:17:39.651171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:43.093 [2024-07-15 13:17:39.651194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.093 [2024-07-15 13:17:39.651220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.093 [2024-07-15 13:17:39.661927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.093 [2024-07-15 13:17:39.662007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:43.093 [2024-07-15 13:17:39.662028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.093 [2024-07-15 13:17:39.662057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.093 [2024-07-15 13:17:39.662175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.093 [2024-07-15 13:17:39.662200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:43.093 [2024-07-15 13:17:39.662213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.093 [2024-07-15 13:17:39.662225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.093 [2024-07-15 13:17:39.662272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.093 [2024-07-15 13:17:39.662287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:43.093 [2024-07-15 13:17:39.662300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.093 [2024-07-15 13:17:39.662311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.093 [2024-07-15 13:17:39.662414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.093 [2024-07-15 13:17:39.662433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:43.093 [2024-07-15 13:17:39.662447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.093 [2024-07-15 13:17:39.662459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.093 [2024-07-15 13:17:39.662509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.093 [2024-07-15 13:17:39.662534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:43.093 [2024-07-15 13:17:39.662547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.093 [2024-07-15 13:17:39.662559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.093 [2024-07-15 13:17:39.662622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.093 [2024-07-15 13:17:39.662642] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:43.093 [2024-07-15 13:17:39.662655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.093 [2024-07-15 13:17:39.662666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.093 [2024-07-15 13:17:39.662732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.093 [2024-07-15 13:17:39.662750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:43.093 [2024-07-15 13:17:39.662763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.093 [2024-07-15 13:17:39.662788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.093 [2024-07-15 13:17:39.662970] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 76.865 ms, result 0 00:18:43.351 00:18:43.351 00:18:43.351 13:17:39 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:43.991 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:43.991 13:17:40 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:43.991 13:17:40 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:18:43.991 13:17:40 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:43.991 13:17:40 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:43.991 13:17:40 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:43.991 13:17:40 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:43.991 13:17:40 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 89988 00:18:43.991 13:17:40 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 89988 ']' 00:18:43.991 13:17:40 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 89988 00:18:43.991 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (89988) - No such process 00:18:43.991 Process with pid 89988 is not found 00:18:43.991 13:17:40 ftl.ftl_trim -- common/autotest_common.sh@973 -- # echo 'Process with pid 89988 is not found' 00:18:43.991 00:18:43.991 real 0m56.900s 00:18:43.991 user 1m16.851s 00:18:43.991 sys 0m7.131s 00:18:43.991 ************************************ 00:18:43.991 END TEST ftl_trim 00:18:43.991 ************************************ 00:18:43.991 13:17:40 ftl.ftl_trim -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:43.991 13:17:40 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:43.991 13:17:40 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:43.991 13:17:40 ftl -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:18:43.991 13:17:40 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:43.991 13:17:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:43.991 ************************************ 00:18:43.991 START TEST ftl_restore 00:18:43.991 ************************************ 00:18:43.991 13:17:40 ftl.ftl_restore -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:43.991 * Looking for test storage... 
00:18:43.991 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:43.991 13:17:40 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:44.249 13:17:40 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:44.249 13:17:40 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:44.249 13:17:40 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:44.249 13:17:40 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:44.249 13:17:40 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:44.249 13:17:40 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:44.249 13:17:40 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.ANPyoaztNt 00:18:44.250 13:17:40 ftl.ftl_restore -- 
ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=90220 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 90220 00:18:44.250 13:17:40 ftl.ftl_restore -- common/autotest_common.sh@827 -- # '[' -z 90220 ']' 00:18:44.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.250 13:17:40 ftl.ftl_restore -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.250 13:17:40 ftl.ftl_restore -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:44.250 13:17:40 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:44.250 13:17:40 ftl.ftl_restore -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.250 13:17:40 ftl.ftl_restore -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:44.250 13:17:40 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:18:44.250 [2024-07-15 13:17:40.864258] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:44.250 [2024-07-15 13:17:40.864459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90220 ] 00:18:44.507 [2024-07-15 13:17:41.014282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.507 [2024-07-15 13:17:41.113453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.441 13:17:41 ftl.ftl_restore -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:45.441 13:17:41 ftl.ftl_restore -- common/autotest_common.sh@860 -- # return 0 00:18:45.441 13:17:41 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:45.441 13:17:41 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:18:45.441 13:17:41 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:45.441 13:17:41 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:18:45.441 13:17:41 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:18:45.441 13:17:41 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:45.699 13:17:42 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:45.699 13:17:42 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:45.699 13:17:42 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:45.699 13:17:42 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:18:45.699 13:17:42 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:18:45.699 13:17:42 ftl.ftl_restore -- 
common/autotest_common.sh@1376 -- # local bs 00:18:45.699 13:17:42 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:18:45.699 13:17:42 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:45.699 13:17:42 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:18:45.699 { 00:18:45.699 "name": "nvme0n1", 00:18:45.699 "aliases": [ 00:18:45.699 "689a41fb-11bb-4685-8e3d-10738e7b0b6a" 00:18:45.699 ], 00:18:45.699 "product_name": "NVMe disk", 00:18:45.699 "block_size": 4096, 00:18:45.699 "num_blocks": 1310720, 00:18:45.699 "uuid": "689a41fb-11bb-4685-8e3d-10738e7b0b6a", 00:18:45.699 "assigned_rate_limits": { 00:18:45.699 "rw_ios_per_sec": 0, 00:18:45.699 "rw_mbytes_per_sec": 0, 00:18:45.699 "r_mbytes_per_sec": 0, 00:18:45.699 "w_mbytes_per_sec": 0 00:18:45.699 }, 00:18:45.699 "claimed": true, 00:18:45.699 "claim_type": "read_many_write_one", 00:18:45.699 "zoned": false, 00:18:45.699 "supported_io_types": { 00:18:45.699 "read": true, 00:18:45.699 "write": true, 00:18:45.699 "unmap": true, 00:18:45.699 "write_zeroes": true, 00:18:45.699 "flush": true, 00:18:45.699 "reset": true, 00:18:45.699 "compare": true, 00:18:45.699 "compare_and_write": false, 00:18:45.699 "abort": true, 00:18:45.699 "nvme_admin": true, 00:18:45.699 "nvme_io": true 00:18:45.699 }, 00:18:45.699 "driver_specific": { 00:18:45.699 "nvme": [ 00:18:45.699 { 00:18:45.699 "pci_address": "0000:00:11.0", 00:18:45.699 "trid": { 00:18:45.699 "trtype": "PCIe", 00:18:45.699 "traddr": "0000:00:11.0" 00:18:45.699 }, 00:18:45.699 "ctrlr_data": { 00:18:45.699 "cntlid": 0, 00:18:45.699 "vendor_id": "0x1b36", 00:18:45.699 "model_number": "QEMU NVMe Ctrl", 00:18:45.699 "serial_number": "12341", 00:18:45.699 "firmware_revision": "8.0.0", 00:18:45.699 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:45.699 "oacs": { 00:18:45.699 "security": 0, 00:18:45.699 "format": 1, 00:18:45.699 "firmware": 0, 00:18:45.699 "ns_manage": 1 00:18:45.699 }, 00:18:45.699 "multi_ctrlr": false, 00:18:45.699 "ana_reporting": false 00:18:45.699 }, 00:18:45.699 "vs": { 00:18:45.699 "nvme_version": "1.4" 00:18:45.699 }, 00:18:45.699 "ns_data": { 00:18:45.699 "id": 1, 00:18:45.699 "can_share": false 00:18:45.699 } 00:18:45.699 } 00:18:45.699 ], 00:18:45.699 "mp_policy": "active_passive" 00:18:45.699 } 00:18:45.699 } 00:18:45.699 ]' 00:18:45.699 13:17:42 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:18:45.957 13:17:42 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:18:45.957 13:17:42 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:18:45.957 13:17:42 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=1310720 00:18:45.957 13:17:42 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:18:45.957 13:17:42 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 5120 00:18:45.957 13:17:42 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:45.957 13:17:42 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:45.957 13:17:42 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:45.957 13:17:42 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:45.957 13:17:42 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:46.216 13:17:42 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=39e9969d-ed9c-4873-93d3-86e1b06c57d1 00:18:46.216 13:17:42 ftl.ftl_restore -- 
ftl/common.sh@29 -- # for lvs in $stores 00:18:46.216 13:17:42 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 39e9969d-ed9c-4873-93d3-86e1b06c57d1 00:18:46.473 13:17:43 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:46.730 13:17:43 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=4ab0be0c-3459-4fc8-8870-e87de6e03d41 00:18:46.730 13:17:43 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4ab0be0c-3459-4fc8-8870-e87de6e03d41 00:18:46.987 13:17:43 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=78ec1214-7d41-4920-a5a1-8a024831c214 00:18:46.987 13:17:43 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:46.987 13:17:43 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 78ec1214-7d41-4920-a5a1-8a024831c214 00:18:46.987 13:17:43 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:46.987 13:17:43 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:46.987 13:17:43 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=78ec1214-7d41-4920-a5a1-8a024831c214 00:18:46.987 13:17:43 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:46.987 13:17:43 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 78ec1214-7d41-4920-a5a1-8a024831c214 00:18:46.987 13:17:43 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=78ec1214-7d41-4920-a5a1-8a024831c214 00:18:46.987 13:17:43 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:18:46.987 13:17:43 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bs 00:18:46.987 13:17:43 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:18:46.987 13:17:43 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 78ec1214-7d41-4920-a5a1-8a024831c214 00:18:47.245 13:17:43 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:18:47.245 { 00:18:47.245 "name": "78ec1214-7d41-4920-a5a1-8a024831c214", 00:18:47.245 "aliases": [ 00:18:47.245 "lvs/nvme0n1p0" 00:18:47.245 ], 00:18:47.245 "product_name": "Logical Volume", 00:18:47.245 "block_size": 4096, 00:18:47.245 "num_blocks": 26476544, 00:18:47.245 "uuid": "78ec1214-7d41-4920-a5a1-8a024831c214", 00:18:47.245 "assigned_rate_limits": { 00:18:47.245 "rw_ios_per_sec": 0, 00:18:47.245 "rw_mbytes_per_sec": 0, 00:18:47.245 "r_mbytes_per_sec": 0, 00:18:47.245 "w_mbytes_per_sec": 0 00:18:47.245 }, 00:18:47.245 "claimed": false, 00:18:47.245 "zoned": false, 00:18:47.245 "supported_io_types": { 00:18:47.245 "read": true, 00:18:47.245 "write": true, 00:18:47.245 "unmap": true, 00:18:47.245 "write_zeroes": true, 00:18:47.245 "flush": false, 00:18:47.245 "reset": true, 00:18:47.245 "compare": false, 00:18:47.245 "compare_and_write": false, 00:18:47.245 "abort": false, 00:18:47.245 "nvme_admin": false, 00:18:47.245 "nvme_io": false 00:18:47.245 }, 00:18:47.245 "driver_specific": { 00:18:47.245 "lvol": { 00:18:47.245 "lvol_store_uuid": "4ab0be0c-3459-4fc8-8870-e87de6e03d41", 00:18:47.245 "base_bdev": "nvme0n1", 00:18:47.245 "thin_provision": true, 00:18:47.245 "num_allocated_clusters": 0, 00:18:47.245 "snapshot": false, 00:18:47.245 "clone": false, 00:18:47.245 "esnap_clone": false 00:18:47.245 } 00:18:47.245 } 00:18:47.245 } 00:18:47.245 ]' 00:18:47.245 13:17:43 ftl.ftl_restore -- 
common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:18:47.245 13:17:43 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:18:47.245 13:17:43 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:18:47.245 13:17:43 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=26476544 00:18:47.245 13:17:43 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:18:47.245 13:17:43 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 103424 00:18:47.245 13:17:43 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:47.245 13:17:43 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:47.245 13:17:43 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:47.820 13:17:44 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:47.820 13:17:44 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:47.820 13:17:44 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 78ec1214-7d41-4920-a5a1-8a024831c214 00:18:47.820 13:17:44 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=78ec1214-7d41-4920-a5a1-8a024831c214 00:18:47.820 13:17:44 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:18:47.820 13:17:44 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bs 00:18:47.820 13:17:44 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:18:47.820 13:17:44 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 78ec1214-7d41-4920-a5a1-8a024831c214 00:18:47.820 13:17:44 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:18:47.820 { 00:18:47.820 "name": "78ec1214-7d41-4920-a5a1-8a024831c214", 00:18:47.820 "aliases": [ 00:18:47.820 "lvs/nvme0n1p0" 00:18:47.820 ], 00:18:47.820 "product_name": "Logical Volume", 00:18:47.820 "block_size": 4096, 00:18:47.820 "num_blocks": 26476544, 00:18:47.820 "uuid": "78ec1214-7d41-4920-a5a1-8a024831c214", 00:18:47.820 "assigned_rate_limits": { 00:18:47.820 "rw_ios_per_sec": 0, 00:18:47.820 "rw_mbytes_per_sec": 0, 00:18:47.820 "r_mbytes_per_sec": 0, 00:18:47.820 "w_mbytes_per_sec": 0 00:18:47.820 }, 00:18:47.820 "claimed": false, 00:18:47.820 "zoned": false, 00:18:47.820 "supported_io_types": { 00:18:47.820 "read": true, 00:18:47.820 "write": true, 00:18:47.820 "unmap": true, 00:18:47.820 "write_zeroes": true, 00:18:47.820 "flush": false, 00:18:47.820 "reset": true, 00:18:47.820 "compare": false, 00:18:47.820 "compare_and_write": false, 00:18:47.820 "abort": false, 00:18:47.820 "nvme_admin": false, 00:18:47.820 "nvme_io": false 00:18:47.820 }, 00:18:47.820 "driver_specific": { 00:18:47.820 "lvol": { 00:18:47.820 "lvol_store_uuid": "4ab0be0c-3459-4fc8-8870-e87de6e03d41", 00:18:47.820 "base_bdev": "nvme0n1", 00:18:47.820 "thin_provision": true, 00:18:47.821 "num_allocated_clusters": 0, 00:18:47.821 "snapshot": false, 00:18:47.821 "clone": false, 00:18:47.821 "esnap_clone": false 00:18:47.821 } 00:18:47.821 } 00:18:47.821 } 00:18:47.821 ]' 00:18:47.821 13:17:44 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:18:48.078 13:17:44 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:18:48.078 13:17:44 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:18:48.078 13:17:44 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=26476544 00:18:48.078 13:17:44 ftl.ftl_restore 
-- common/autotest_common.sh@1383 -- # bdev_size=103424 00:18:48.078 13:17:44 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 103424 00:18:48.078 13:17:44 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:48.078 13:17:44 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:48.335 13:17:44 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:48.335 13:17:44 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 78ec1214-7d41-4920-a5a1-8a024831c214 00:18:48.335 13:17:44 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=78ec1214-7d41-4920-a5a1-8a024831c214 00:18:48.335 13:17:44 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:18:48.335 13:17:44 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bs 00:18:48.335 13:17:44 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:18:48.335 13:17:44 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 78ec1214-7d41-4920-a5a1-8a024831c214 00:18:48.605 13:17:45 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:18:48.605 { 00:18:48.605 "name": "78ec1214-7d41-4920-a5a1-8a024831c214", 00:18:48.605 "aliases": [ 00:18:48.605 "lvs/nvme0n1p0" 00:18:48.605 ], 00:18:48.605 "product_name": "Logical Volume", 00:18:48.605 "block_size": 4096, 00:18:48.605 "num_blocks": 26476544, 00:18:48.605 "uuid": "78ec1214-7d41-4920-a5a1-8a024831c214", 00:18:48.605 "assigned_rate_limits": { 00:18:48.605 "rw_ios_per_sec": 0, 00:18:48.605 "rw_mbytes_per_sec": 0, 00:18:48.605 "r_mbytes_per_sec": 0, 00:18:48.605 "w_mbytes_per_sec": 0 00:18:48.605 }, 00:18:48.605 "claimed": false, 00:18:48.605 "zoned": false, 00:18:48.605 "supported_io_types": { 00:18:48.605 "read": true, 00:18:48.605 "write": true, 00:18:48.605 "unmap": true, 00:18:48.605 "write_zeroes": true, 00:18:48.605 "flush": false, 00:18:48.605 "reset": true, 00:18:48.605 "compare": false, 00:18:48.605 "compare_and_write": false, 00:18:48.605 "abort": false, 00:18:48.605 "nvme_admin": false, 00:18:48.605 "nvme_io": false 00:18:48.605 }, 00:18:48.605 "driver_specific": { 00:18:48.605 "lvol": { 00:18:48.605 "lvol_store_uuid": "4ab0be0c-3459-4fc8-8870-e87de6e03d41", 00:18:48.605 "base_bdev": "nvme0n1", 00:18:48.605 "thin_provision": true, 00:18:48.605 "num_allocated_clusters": 0, 00:18:48.605 "snapshot": false, 00:18:48.605 "clone": false, 00:18:48.605 "esnap_clone": false 00:18:48.605 } 00:18:48.605 } 00:18:48.605 } 00:18:48.605 ]' 00:18:48.605 13:17:45 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:18:48.605 13:17:45 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:18:48.605 13:17:45 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:18:48.605 13:17:45 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=26476544 00:18:48.605 13:17:45 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:18:48.605 13:17:45 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 103424 00:18:48.605 13:17:45 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:48.605 13:17:45 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 78ec1214-7d41-4920-a5a1-8a024831c214 --l2p_dram_limit 10' 00:18:48.605 13:17:45 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:48.605 13:17:45 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 
0000:00:10.0 ']' 00:18:48.605 13:17:45 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:48.605 13:17:45 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:48.605 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:48.605 13:17:45 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 78ec1214-7d41-4920-a5a1-8a024831c214 --l2p_dram_limit 10 -c nvc0n1p0 00:18:48.865 [2024-07-15 13:17:45.419703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.865 [2024-07-15 13:17:45.419775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:48.865 [2024-07-15 13:17:45.419820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:48.865 [2024-07-15 13:17:45.419834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.865 [2024-07-15 13:17:45.419939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.865 [2024-07-15 13:17:45.419962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:48.865 [2024-07-15 13:17:45.419979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:18:48.865 [2024-07-15 13:17:45.420019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.865 [2024-07-15 13:17:45.420066] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:48.865 [2024-07-15 13:17:45.420482] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:48.865 [2024-07-15 13:17:45.420515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.865 [2024-07-15 13:17:45.420532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:48.865 [2024-07-15 13:17:45.420548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:18:48.865 [2024-07-15 13:17:45.420560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.865 [2024-07-15 13:17:45.420787] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 48704cf3-47fc-45de-bbc6-7542cac85d09 00:18:48.865 [2024-07-15 13:17:45.422627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.865 [2024-07-15 13:17:45.422674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:48.865 [2024-07-15 13:17:45.422693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:48.865 [2024-07-15 13:17:45.422712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.865 [2024-07-15 13:17:45.432379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.865 [2024-07-15 13:17:45.432446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:48.865 [2024-07-15 13:17:45.432466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.603 ms 00:18:48.865 [2024-07-15 13:17:45.432482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.865 [2024-07-15 13:17:45.432622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.865 [2024-07-15 13:17:45.432655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:48.865 [2024-07-15 13:17:45.432670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.076 ms 00:18:48.865 [2024-07-15 13:17:45.432685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.865 [2024-07-15 13:17:45.432795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.865 [2024-07-15 13:17:45.432820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:48.865 [2024-07-15 13:17:45.432834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:48.865 [2024-07-15 13:17:45.432849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.865 [2024-07-15 13:17:45.432886] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:48.865 [2024-07-15 13:17:45.435239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.865 [2024-07-15 13:17:45.435280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:48.865 [2024-07-15 13:17:45.435299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.358 ms 00:18:48.865 [2024-07-15 13:17:45.435322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.865 [2024-07-15 13:17:45.435377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.865 [2024-07-15 13:17:45.435394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:48.865 [2024-07-15 13:17:45.435410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:48.865 [2024-07-15 13:17:45.435422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.865 [2024-07-15 13:17:45.435456] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:48.865 [2024-07-15 13:17:45.435658] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:48.865 [2024-07-15 13:17:45.435686] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:48.865 [2024-07-15 13:17:45.435711] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:48.865 [2024-07-15 13:17:45.435731] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:48.865 [2024-07-15 13:17:45.435745] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:48.865 [2024-07-15 13:17:45.435760] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:48.865 [2024-07-15 13:17:45.435772] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:48.865 [2024-07-15 13:17:45.435798] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:48.865 [2024-07-15 13:17:45.435810] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:48.865 [2024-07-15 13:17:45.435826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.865 [2024-07-15 13:17:45.435838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:48.865 [2024-07-15 13:17:45.435852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:18:48.865 [2024-07-15 13:17:45.435865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.865 [2024-07-15 13:17:45.435972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:48.865 [2024-07-15 13:17:45.435989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:48.865 [2024-07-15 13:17:45.436008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:48.865 [2024-07-15 13:17:45.436021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.865 [2024-07-15 13:17:45.436139] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:48.865 [2024-07-15 13:17:45.436398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:48.865 [2024-07-15 13:17:45.436452] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:48.865 [2024-07-15 13:17:45.436496] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.865 [2024-07-15 13:17:45.436554] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:48.865 [2024-07-15 13:17:45.436719] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:48.865 [2024-07-15 13:17:45.436813] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:48.865 [2024-07-15 13:17:45.436874] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:48.865 [2024-07-15 13:17:45.436919] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:48.865 [2024-07-15 13:17:45.436961] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:48.865 [2024-07-15 13:17:45.437004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:48.865 [2024-07-15 13:17:45.437044] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:48.865 [2024-07-15 13:17:45.437087] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:48.865 [2024-07-15 13:17:45.437127] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:48.865 [2024-07-15 13:17:45.437243] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:48.865 [2024-07-15 13:17:45.437297] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.865 [2024-07-15 13:17:45.437383] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:48.865 [2024-07-15 13:17:45.437423] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:48.865 [2024-07-15 13:17:45.437467] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.865 [2024-07-15 13:17:45.437508] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:48.865 [2024-07-15 13:17:45.437550] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:48.865 [2024-07-15 13:17:45.437591] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:48.865 [2024-07-15 13:17:45.437610] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:48.865 [2024-07-15 13:17:45.437623] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:48.865 [2024-07-15 13:17:45.437640] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:48.865 [2024-07-15 13:17:45.437653] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:48.865 [2024-07-15 13:17:45.437667] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:48.865 [2024-07-15 13:17:45.437678] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:48.865 [2024-07-15 13:17:45.437691] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l3 00:18:48.865 [2024-07-15 13:17:45.437702] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:48.865 [2024-07-15 13:17:45.437719] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:48.865 [2024-07-15 13:17:45.437730] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:48.866 [2024-07-15 13:17:45.437744] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:48.866 [2024-07-15 13:17:45.437755] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:48.866 [2024-07-15 13:17:45.437768] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:48.866 [2024-07-15 13:17:45.437780] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:48.866 [2024-07-15 13:17:45.437794] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:48.866 [2024-07-15 13:17:45.437805] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:48.866 [2024-07-15 13:17:45.437818] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:48.866 [2024-07-15 13:17:45.437829] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.866 [2024-07-15 13:17:45.437843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:48.866 [2024-07-15 13:17:45.437854] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:48.866 [2024-07-15 13:17:45.437867] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.866 [2024-07-15 13:17:45.437878] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:48.866 [2024-07-15 13:17:45.437892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:48.866 [2024-07-15 13:17:45.437905] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:48.866 [2024-07-15 13:17:45.437932] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.866 [2024-07-15 13:17:45.437957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:48.866 [2024-07-15 13:17:45.437971] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:48.866 [2024-07-15 13:17:45.437983] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:48.866 [2024-07-15 13:17:45.437997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:48.866 [2024-07-15 13:17:45.438008] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:48.866 [2024-07-15 13:17:45.438022] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:48.866 [2024-07-15 13:17:45.438039] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:48.866 [2024-07-15 13:17:45.438066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:48.866 [2024-07-15 13:17:45.438093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:48.866 [2024-07-15 13:17:45.438110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:48.866 [2024-07-15 13:17:45.438124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 
blk_offs:0x50a0 blk_sz:0x80 00:18:48.866 [2024-07-15 13:17:45.438166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:48.866 [2024-07-15 13:17:45.438181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:48.866 [2024-07-15 13:17:45.438196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:48.866 [2024-07-15 13:17:45.438209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:48.866 [2024-07-15 13:17:45.438226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:48.866 [2024-07-15 13:17:45.438238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:48.866 [2024-07-15 13:17:45.438254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:48.866 [2024-07-15 13:17:45.438267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:48.866 [2024-07-15 13:17:45.438281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:48.866 [2024-07-15 13:17:45.438294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:48.866 [2024-07-15 13:17:45.438309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:48.866 [2024-07-15 13:17:45.438321] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:48.866 [2024-07-15 13:17:45.438337] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:48.866 [2024-07-15 13:17:45.438360] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:48.866 [2024-07-15 13:17:45.438376] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:48.866 [2024-07-15 13:17:45.438389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:48.866 [2024-07-15 13:17:45.438403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:48.866 [2024-07-15 13:17:45.438418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.866 [2024-07-15 13:17:45.438433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:48.866 [2024-07-15 13:17:45.438446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.347 ms 00:18:48.866 [2024-07-15 13:17:45.438464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.866 [2024-07-15 13:17:45.438574] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV 
cache data region needs scrubbing, this may take a while. 00:18:48.866 [2024-07-15 13:17:45.438600] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:51.392 [2024-07-15 13:17:47.853301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.392 [2024-07-15 13:17:47.853385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:51.392 [2024-07-15 13:17:47.853409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2414.733 ms 00:18:51.392 [2024-07-15 13:17:47.853426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.392 [2024-07-15 13:17:47.868443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.392 [2024-07-15 13:17:47.868516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:51.392 [2024-07-15 13:17:47.868549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.857 ms 00:18:51.392 [2024-07-15 13:17:47.868566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.392 [2024-07-15 13:17:47.868708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.392 [2024-07-15 13:17:47.868739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:51.392 [2024-07-15 13:17:47.868754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:18:51.392 [2024-07-15 13:17:47.868769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.392 [2024-07-15 13:17:47.882480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.392 [2024-07-15 13:17:47.882550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:51.392 [2024-07-15 13:17:47.882572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.615 ms 00:18:51.392 [2024-07-15 13:17:47.882599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.392 [2024-07-15 13:17:47.882670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.392 [2024-07-15 13:17:47.882691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:51.392 [2024-07-15 13:17:47.882706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:51.392 [2024-07-15 13:17:47.882721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.392 [2024-07-15 13:17:47.883392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.392 [2024-07-15 13:17:47.883417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:51.392 [2024-07-15 13:17:47.883432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:18:51.392 [2024-07-15 13:17:47.883446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.392 [2024-07-15 13:17:47.883616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.392 [2024-07-15 13:17:47.883645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:51.392 [2024-07-15 13:17:47.883658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:18:51.392 [2024-07-15 13:17:47.883672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.392 [2024-07-15 13:17:47.893117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.392 [2024-07-15 13:17:47.893196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:18:51.392 [2024-07-15 13:17:47.893228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.413 ms 00:18:51.392 [2024-07-15 13:17:47.893245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.392 [2024-07-15 13:17:47.904499] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:51.392 [2024-07-15 13:17:47.908798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.392 [2024-07-15 13:17:47.908850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:51.392 [2024-07-15 13:17:47.908874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.401 ms 00:18:51.392 [2024-07-15 13:17:47.908901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.392 [2024-07-15 13:17:47.979966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.392 [2024-07-15 13:17:47.980050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:51.392 [2024-07-15 13:17:47.980094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.988 ms 00:18:51.392 [2024-07-15 13:17:47.980112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.392 [2024-07-15 13:17:47.980399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.392 [2024-07-15 13:17:47.980423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:51.392 [2024-07-15 13:17:47.980441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:18:51.392 [2024-07-15 13:17:47.980453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.392 [2024-07-15 13:17:47.984428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.392 [2024-07-15 13:17:47.984472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:51.392 [2024-07-15 13:17:47.984513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.937 ms 00:18:51.392 [2024-07-15 13:17:47.984544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.392 [2024-07-15 13:17:47.987840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.392 [2024-07-15 13:17:47.987881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:51.392 [2024-07-15 13:17:47.987919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.242 ms 00:18:51.392 [2024-07-15 13:17:47.987931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.392 [2024-07-15 13:17:47.988429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.392 [2024-07-15 13:17:47.988456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:51.392 [2024-07-15 13:17:47.988475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:18:51.392 [2024-07-15 13:17:47.988499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.392 [2024-07-15 13:17:48.027077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.392 [2024-07-15 13:17:48.027205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:51.392 [2024-07-15 13:17:48.027235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.529 ms 00:18:51.392 [2024-07-15 13:17:48.027254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:51.392 [2024-07-15 13:17:48.033169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.392 [2024-07-15 13:17:48.033246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:51.392 [2024-07-15 13:17:48.033288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.830 ms 00:18:51.392 [2024-07-15 13:17:48.033302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.392 [2024-07-15 13:17:48.037338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.392 [2024-07-15 13:17:48.037384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:51.392 [2024-07-15 13:17:48.037423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.974 ms 00:18:51.392 [2024-07-15 13:17:48.037435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.392 [2024-07-15 13:17:48.041692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.392 [2024-07-15 13:17:48.041739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:51.392 [2024-07-15 13:17:48.041779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.200 ms 00:18:51.392 [2024-07-15 13:17:48.041791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.392 [2024-07-15 13:17:48.041863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.392 [2024-07-15 13:17:48.041885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:51.392 [2024-07-15 13:17:48.041903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:18:51.392 [2024-07-15 13:17:48.041915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.392 [2024-07-15 13:17:48.042006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.392 [2024-07-15 13:17:48.042024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:51.392 [2024-07-15 13:17:48.042042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:18:51.392 [2024-07-15 13:17:48.042054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.392 [2024-07-15 13:17:48.043478] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2623.180 ms, result 0 00:18:51.392 { 00:18:51.392 "name": "ftl0", 00:18:51.392 "uuid": "48704cf3-47fc-45de-bbc6-7542cac85d09" 00:18:51.392 } 00:18:51.392 13:17:48 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:51.392 13:17:48 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:51.649 13:17:48 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:51.649 13:17:48 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:51.907 [2024-07-15 13:17:48.617624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.907 [2024-07-15 13:17:48.617727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:51.907 [2024-07-15 13:17:48.617752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:51.907 [2024-07-15 13:17:48.617771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.907 [2024-07-15 13:17:48.617812] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: 
[FTL][ftl0] FTL IO channel destroy on app_thread 00:18:51.907 [2024-07-15 13:17:48.618737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.907 [2024-07-15 13:17:48.618774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:51.907 [2024-07-15 13:17:48.618799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.884 ms 00:18:51.907 [2024-07-15 13:17:48.618812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.907 [2024-07-15 13:17:48.619117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.907 [2024-07-15 13:17:48.619136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:51.907 [2024-07-15 13:17:48.619172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:18:51.907 [2024-07-15 13:17:48.619187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.907 [2024-07-15 13:17:48.622472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.907 [2024-07-15 13:17:48.622504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:51.907 [2024-07-15 13:17:48.622523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.252 ms 00:18:51.907 [2024-07-15 13:17:48.622545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.907 [2024-07-15 13:17:48.628994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.907 [2024-07-15 13:17:48.629031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:51.907 [2024-07-15 13:17:48.629068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.416 ms 00:18:51.907 [2024-07-15 13:17:48.629080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.907 [2024-07-15 13:17:48.631000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.907 [2024-07-15 13:17:48.631182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:51.907 [2024-07-15 13:17:48.631220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.607 ms 00:18:51.907 [2024-07-15 13:17:48.631234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.907 [2024-07-15 13:17:48.636261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.908 [2024-07-15 13:17:48.636452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:51.908 [2024-07-15 13:17:48.636584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.968 ms 00:18:51.908 [2024-07-15 13:17:48.636723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.908 [2024-07-15 13:17:48.636931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.908 [2024-07-15 13:17:48.637002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:51.908 [2024-07-15 13:17:48.637138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:18:51.908 [2024-07-15 13:17:48.637288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.908 [2024-07-15 13:17:48.639335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.908 [2024-07-15 13:17:48.639492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:51.908 [2024-07-15 13:17:48.639630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.968 ms 
00:18:51.908 [2024-07-15 13:17:48.639766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.908 [2024-07-15 13:17:48.641276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.908 [2024-07-15 13:17:48.641423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:51.908 [2024-07-15 13:17:48.641550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.412 ms 00:18:51.908 [2024-07-15 13:17:48.641675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.908 [2024-07-15 13:17:48.642909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.908 [2024-07-15 13:17:48.643058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:51.908 [2024-07-15 13:17:48.643195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.119 ms 00:18:51.908 [2024-07-15 13:17:48.643312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.908 [2024-07-15 13:17:48.644495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.908 [2024-07-15 13:17:48.644644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:51.908 [2024-07-15 13:17:48.644768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.052 ms 00:18:51.908 [2024-07-15 13:17:48.644876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.908 [2024-07-15 13:17:48.645012] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:51.908 [2024-07-15 13:17:48.645044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:51.908 [2024-07-15 13:17:48.645063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:51.908 [2024-07-15 13:17:48.645077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:51.908 [2024-07-15 13:17:48.645107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:51.908 [2024-07-15 13:17:48.645120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 
0 state: free 00:18:52.167 [2024-07-15 13:17:48.645436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
39: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.645999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646167] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646526] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:52.167 [2024-07-15 13:17:48.646715] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:52.167 [2024-07-15 13:17:48.646734] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 48704cf3-47fc-45de-bbc6-7542cac85d09 00:18:52.167 [2024-07-15 13:17:48.646747] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:52.167 [2024-07-15 13:17:48.646763] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:52.167 [2024-07-15 13:17:48.646775] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:52.167 [2024-07-15 13:17:48.646796] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:52.167 [2024-07-15 13:17:48.646808] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:52.167 [2024-07-15 13:17:48.646823] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:52.167 [2024-07-15 13:17:48.646838] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:52.167 [2024-07-15 13:17:48.646851] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:52.167 [2024-07-15 13:17:48.646862] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:52.167 [2024-07-15 13:17:48.646877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.167 [2024-07-15 13:17:48.646890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:52.167 [2024-07-15 13:17:48.646906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.871 ms 00:18:52.167 [2024-07-15 13:17:48.646918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.167 [2024-07-15 13:17:48.649095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.167 [2024-07-15 13:17:48.649119] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:52.167 [2024-07-15 13:17:48.649140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.139 ms 00:18:52.167 [2024-07-15 13:17:48.649459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.167 [2024-07-15 13:17:48.649673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.167 [2024-07-15 13:17:48.649728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:52.167 [2024-07-15 13:17:48.649839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:18:52.167 [2024-07-15 13:17:48.649892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.167 [2024-07-15 13:17:48.658330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.167 [2024-07-15 13:17:48.658390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:52.167 [2024-07-15 13:17:48.658414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.167 [2024-07-15 13:17:48.658430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.167 [2024-07-15 13:17:48.658524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.167 [2024-07-15 13:17:48.658541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:52.167 [2024-07-15 13:17:48.658558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.167 [2024-07-15 13:17:48.658571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.167 [2024-07-15 13:17:48.658698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.167 [2024-07-15 13:17:48.658719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:52.167 [2024-07-15 13:17:48.658750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.167 [2024-07-15 13:17:48.658763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.167 [2024-07-15 13:17:48.658799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.167 [2024-07-15 13:17:48.658814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:52.167 [2024-07-15 13:17:48.658829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.167 [2024-07-15 13:17:48.658841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.167 [2024-07-15 13:17:48.674761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.167 [2024-07-15 13:17:48.674835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:52.167 [2024-07-15 13:17:48.674878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.167 [2024-07-15 13:17:48.674891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.167 [2024-07-15 13:17:48.685326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.167 [2024-07-15 13:17:48.685401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:52.167 [2024-07-15 13:17:48.685426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.167 [2024-07-15 13:17:48.685440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.167 [2024-07-15 13:17:48.685570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:18:52.167 [2024-07-15 13:17:48.685591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:52.167 [2024-07-15 13:17:48.685611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.167 [2024-07-15 13:17:48.685624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.167 [2024-07-15 13:17:48.685702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.167 [2024-07-15 13:17:48.685725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:52.167 [2024-07-15 13:17:48.685741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.167 [2024-07-15 13:17:48.685754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.167 [2024-07-15 13:17:48.685865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.167 [2024-07-15 13:17:48.685886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:52.167 [2024-07-15 13:17:48.685902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.167 [2024-07-15 13:17:48.685915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.167 [2024-07-15 13:17:48.685974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.167 [2024-07-15 13:17:48.685994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:52.167 [2024-07-15 13:17:48.686013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.167 [2024-07-15 13:17:48.686026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.167 [2024-07-15 13:17:48.686089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.167 [2024-07-15 13:17:48.686117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:52.167 [2024-07-15 13:17:48.686192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.167 [2024-07-15 13:17:48.686211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.167 [2024-07-15 13:17:48.686285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.167 [2024-07-15 13:17:48.686306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:52.167 [2024-07-15 13:17:48.686322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.167 [2024-07-15 13:17:48.686335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.167 [2024-07-15 13:17:48.686513] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 68.839 ms, result 0 00:18:52.167 true 00:18:52.167 13:17:48 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 90220 00:18:52.167 13:17:48 ftl.ftl_restore -- common/autotest_common.sh@946 -- # '[' -z 90220 ']' 00:18:52.167 13:17:48 ftl.ftl_restore -- common/autotest_common.sh@950 -- # kill -0 90220 00:18:52.167 13:17:48 ftl.ftl_restore -- common/autotest_common.sh@951 -- # uname 00:18:52.167 13:17:48 ftl.ftl_restore -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:52.167 13:17:48 ftl.ftl_restore -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90220 00:18:52.167 killing process with pid 90220 00:18:52.167 13:17:48 ftl.ftl_restore -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:52.167 13:17:48 ftl.ftl_restore -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:52.167 13:17:48 ftl.ftl_restore -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90220' 00:18:52.167 13:17:48 ftl.ftl_restore -- common/autotest_common.sh@965 -- # kill 90220 00:18:52.167 13:17:48 ftl.ftl_restore -- common/autotest_common.sh@970 -- # wait 90220 00:18:55.506 13:17:52 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:19:00.765 262144+0 records in 00:19:00.765 262144+0 records out 00:19:00.765 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.66709 s, 230 MB/s 00:19:00.765 13:17:56 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:19:02.668 13:17:58 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:02.668 [2024-07-15 13:17:59.006628] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:19:02.668 [2024-07-15 13:17:59.006824] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90439 ] 00:19:02.668 [2024-07-15 13:17:59.153620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.668 [2024-07-15 13:17:59.255075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.668 [2024-07-15 13:17:59.387087] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:02.668 [2024-07-15 13:17:59.387230] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:02.927 [2024-07-15 13:17:59.541420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.927 [2024-07-15 13:17:59.541500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:02.927 [2024-07-15 13:17:59.541523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:02.927 [2024-07-15 13:17:59.541536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.927 [2024-07-15 13:17:59.541621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.927 [2024-07-15 13:17:59.541644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:02.928 [2024-07-15 13:17:59.541657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:02.928 [2024-07-15 13:17:59.541675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.928 [2024-07-15 13:17:59.541714] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:02.928 [2024-07-15 13:17:59.542032] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:02.928 [2024-07-15 13:17:59.542059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.928 [2024-07-15 13:17:59.542085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:02.928 [2024-07-15 13:17:59.542099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:19:02.928 [2024-07-15 13:17:59.542110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.928 [2024-07-15 13:17:59.544088] 
mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:02.928 [2024-07-15 13:17:59.547034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.928 [2024-07-15 13:17:59.547078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:02.928 [2024-07-15 13:17:59.547103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.948 ms 00:19:02.928 [2024-07-15 13:17:59.547115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.928 [2024-07-15 13:17:59.547236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.928 [2024-07-15 13:17:59.547261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:02.928 [2024-07-15 13:17:59.547285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:02.928 [2024-07-15 13:17:59.547297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.928 [2024-07-15 13:17:59.555883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.928 [2024-07-15 13:17:59.555945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:02.928 [2024-07-15 13:17:59.555972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.509 ms 00:19:02.928 [2024-07-15 13:17:59.555984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.928 [2024-07-15 13:17:59.556113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.928 [2024-07-15 13:17:59.556133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:02.928 [2024-07-15 13:17:59.556197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:19:02.928 [2024-07-15 13:17:59.556221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.928 [2024-07-15 13:17:59.556327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.928 [2024-07-15 13:17:59.556346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:02.928 [2024-07-15 13:17:59.556370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:02.928 [2024-07-15 13:17:59.556397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.928 [2024-07-15 13:17:59.556439] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:02.928 [2024-07-15 13:17:59.558544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.928 [2024-07-15 13:17:59.558593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:02.928 [2024-07-15 13:17:59.558609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.119 ms 00:19:02.928 [2024-07-15 13:17:59.558621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.928 [2024-07-15 13:17:59.558673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.928 [2024-07-15 13:17:59.558699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:02.928 [2024-07-15 13:17:59.558716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:02.928 [2024-07-15 13:17:59.558727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.928 [2024-07-15 13:17:59.558767] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:02.928 [2024-07-15 13:17:59.558799] 
upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:02.928 [2024-07-15 13:17:59.558849] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:02.928 [2024-07-15 13:17:59.558874] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:02.928 [2024-07-15 13:17:59.558976] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:02.928 [2024-07-15 13:17:59.559010] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:02.928 [2024-07-15 13:17:59.559026] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:02.928 [2024-07-15 13:17:59.559050] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:02.928 [2024-07-15 13:17:59.559073] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:02.928 [2024-07-15 13:17:59.559085] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:02.928 [2024-07-15 13:17:59.559097] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:02.928 [2024-07-15 13:17:59.559109] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:02.928 [2024-07-15 13:17:59.559120] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:02.928 [2024-07-15 13:17:59.559156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.928 [2024-07-15 13:17:59.559172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:02.928 [2024-07-15 13:17:59.559185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:19:02.928 [2024-07-15 13:17:59.559212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.928 [2024-07-15 13:17:59.559308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.928 [2024-07-15 13:17:59.559324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:02.928 [2024-07-15 13:17:59.559336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:02.928 [2024-07-15 13:17:59.559347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.928 [2024-07-15 13:17:59.559459] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:02.928 [2024-07-15 13:17:59.559477] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:02.928 [2024-07-15 13:17:59.559491] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:02.928 [2024-07-15 13:17:59.559503] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.928 [2024-07-15 13:17:59.559518] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:02.928 [2024-07-15 13:17:59.559529] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:02.928 [2024-07-15 13:17:59.559540] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:02.928 [2024-07-15 13:17:59.559552] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:02.928 [2024-07-15 13:17:59.559564] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 
00:19:02.928 [2024-07-15 13:17:59.559575] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:02.928 [2024-07-15 13:17:59.559585] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:02.928 [2024-07-15 13:17:59.559596] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:02.928 [2024-07-15 13:17:59.559607] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:02.928 [2024-07-15 13:17:59.559618] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:02.928 [2024-07-15 13:17:59.559629] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:02.928 [2024-07-15 13:17:59.559640] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.928 [2024-07-15 13:17:59.559656] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:02.928 [2024-07-15 13:17:59.559668] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:02.928 [2024-07-15 13:17:59.559689] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.928 [2024-07-15 13:17:59.559700] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:02.928 [2024-07-15 13:17:59.559711] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:02.928 [2024-07-15 13:17:59.559722] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.928 [2024-07-15 13:17:59.559733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:02.928 [2024-07-15 13:17:59.559744] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:02.928 [2024-07-15 13:17:59.559755] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.928 [2024-07-15 13:17:59.559766] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:02.928 [2024-07-15 13:17:59.559777] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:02.928 [2024-07-15 13:17:59.559787] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.928 [2024-07-15 13:17:59.559798] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:02.928 [2024-07-15 13:17:59.559809] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:02.928 [2024-07-15 13:17:59.559820] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.928 [2024-07-15 13:17:59.559830] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:02.928 [2024-07-15 13:17:59.559847] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:02.928 [2024-07-15 13:17:59.559859] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:02.928 [2024-07-15 13:17:59.559870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:02.928 [2024-07-15 13:17:59.559881] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:02.928 [2024-07-15 13:17:59.559891] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:02.928 [2024-07-15 13:17:59.559903] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:02.928 [2024-07-15 13:17:59.559913] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:02.928 [2024-07-15 13:17:59.559924] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.928 [2024-07-15 13:17:59.559935] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:02.928 [2024-07-15 13:17:59.559946] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:02.928 [2024-07-15 13:17:59.559958] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.928 [2024-07-15 13:17:59.559968] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:02.928 [2024-07-15 13:17:59.559980] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:02.928 [2024-07-15 13:17:59.559991] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:02.928 [2024-07-15 13:17:59.560012] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.928 [2024-07-15 13:17:59.560025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:02.928 [2024-07-15 13:17:59.560042] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:02.928 [2024-07-15 13:17:59.560054] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:02.928 [2024-07-15 13:17:59.560066] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:02.928 [2024-07-15 13:17:59.560077] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:02.928 [2024-07-15 13:17:59.560088] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:02.928 [2024-07-15 13:17:59.560100] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:02.929 [2024-07-15 13:17:59.560114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:02.929 [2024-07-15 13:17:59.560128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:02.929 [2024-07-15 13:17:59.560140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:02.929 [2024-07-15 13:17:59.560560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:02.929 [2024-07-15 13:17:59.560702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:02.929 [2024-07-15 13:17:59.560772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:02.929 [2024-07-15 13:17:59.560835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:02.929 [2024-07-15 13:17:59.560968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:02.929 [2024-07-15 13:17:59.561030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:02.929 [2024-07-15 13:17:59.561086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:02.929 [2024-07-15 13:17:59.561237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:02.929 [2024-07-15 13:17:59.561469] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:02.929 [2024-07-15 13:17:59.561532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:02.929 [2024-07-15 13:17:59.561588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:02.929 [2024-07-15 13:17:59.561726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:02.929 [2024-07-15 13:17:59.561831] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:02.929 [2024-07-15 13:17:59.561926] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:02.929 [2024-07-15 13:17:59.562086] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:02.929 [2024-07-15 13:17:59.562175] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:02.929 [2024-07-15 13:17:59.562257] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:02.929 [2024-07-15 13:17:59.562390] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:02.929 [2024-07-15 13:17:59.562455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.929 [2024-07-15 13:17:59.562483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:02.929 [2024-07-15 13:17:59.562497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.063 ms 00:19:02.929 [2024-07-15 13:17:59.562525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.929 [2024-07-15 13:17:59.590739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.929 [2024-07-15 13:17:59.590837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:02.929 [2024-07-15 13:17:59.590866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.111 ms 00:19:02.929 [2024-07-15 13:17:59.590885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.929 [2024-07-15 13:17:59.591056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.929 [2024-07-15 13:17:59.591080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:02.929 [2024-07-15 13:17:59.591098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:19:02.929 [2024-07-15 13:17:59.591123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.929 [2024-07-15 13:17:59.604269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.929 [2024-07-15 13:17:59.604341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:02.929 [2024-07-15 13:17:59.604372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.953 ms 00:19:02.929 [2024-07-15 13:17:59.604385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.929 [2024-07-15 13:17:59.604457] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.929 [2024-07-15 13:17:59.604475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:02.929 [2024-07-15 13:17:59.604489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:02.929 [2024-07-15 13:17:59.604508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.929 [2024-07-15 13:17:59.605127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.929 [2024-07-15 13:17:59.605190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:02.929 [2024-07-15 13:17:59.605215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:19:02.929 [2024-07-15 13:17:59.605227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.929 [2024-07-15 13:17:59.605427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.929 [2024-07-15 13:17:59.605446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:02.929 [2024-07-15 13:17:59.605459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:19:02.929 [2024-07-15 13:17:59.605471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.929 [2024-07-15 13:17:59.613091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.929 [2024-07-15 13:17:59.613140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:02.929 [2024-07-15 13:17:59.613209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.587 ms 00:19:02.929 [2024-07-15 13:17:59.613229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.929 [2024-07-15 13:17:59.616234] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:02.929 [2024-07-15 13:17:59.616282] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:02.929 [2024-07-15 13:17:59.616301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.929 [2024-07-15 13:17:59.616315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:02.929 [2024-07-15 13:17:59.616328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.922 ms 00:19:02.929 [2024-07-15 13:17:59.616341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.929 [2024-07-15 13:17:59.632287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.929 [2024-07-15 13:17:59.632354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:02.929 [2024-07-15 13:17:59.632388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.898 ms 00:19:02.929 [2024-07-15 13:17:59.632401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.929 [2024-07-15 13:17:59.634484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.929 [2024-07-15 13:17:59.634527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:02.929 [2024-07-15 13:17:59.634544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.019 ms 00:19:02.929 [2024-07-15 13:17:59.634555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.929 [2024-07-15 13:17:59.636266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.929 [2024-07-15 
13:17:59.636306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:02.929 [2024-07-15 13:17:59.636324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.655 ms 00:19:02.929 [2024-07-15 13:17:59.636335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.929 [2024-07-15 13:17:59.636765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.929 [2024-07-15 13:17:59.636795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:02.929 [2024-07-15 13:17:59.636811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:19:02.929 [2024-07-15 13:17:59.636823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.929 [2024-07-15 13:17:59.659753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.929 [2024-07-15 13:17:59.659847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:02.929 [2024-07-15 13:17:59.659870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.903 ms 00:19:02.929 [2024-07-15 13:17:59.659883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.187 [2024-07-15 13:17:59.668228] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:03.187 [2024-07-15 13:17:59.671996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.187 [2024-07-15 13:17:59.672044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:03.187 [2024-07-15 13:17:59.672063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.026 ms 00:19:03.187 [2024-07-15 13:17:59.672087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.187 [2024-07-15 13:17:59.672242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.187 [2024-07-15 13:17:59.672266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:03.187 [2024-07-15 13:17:59.672281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:03.187 [2024-07-15 13:17:59.672293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.187 [2024-07-15 13:17:59.672399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.187 [2024-07-15 13:17:59.672418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:03.187 [2024-07-15 13:17:59.672436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:19:03.187 [2024-07-15 13:17:59.672452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.188 [2024-07-15 13:17:59.672486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.188 [2024-07-15 13:17:59.672511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:03.188 [2024-07-15 13:17:59.672524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:03.188 [2024-07-15 13:17:59.672535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.188 [2024-07-15 13:17:59.672579] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:03.188 [2024-07-15 13:17:59.672596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.188 [2024-07-15 13:17:59.672608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:03.188 [2024-07-15 
13:17:59.672620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:03.188 [2024-07-15 13:17:59.672672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.188 [2024-07-15 13:17:59.677018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.188 [2024-07-15 13:17:59.677064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:03.188 [2024-07-15 13:17:59.677083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.317 ms 00:19:03.188 [2024-07-15 13:17:59.677104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.188 [2024-07-15 13:17:59.677219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.188 [2024-07-15 13:17:59.677243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:03.188 [2024-07-15 13:17:59.677257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:03.188 [2024-07-15 13:17:59.677269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.188 [2024-07-15 13:17:59.678618] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 136.670 ms, result 0 00:19:41.991  Copying: 27/1024 [MB] (27 MBps) Copying: 55/1024 [MB] (28 MBps) Copying: 82/1024 [MB] (27 MBps) Copying: 106/1024 [MB] (23 MBps) Copying: 132/1024 [MB] (25 MBps) Copying: 157/1024 [MB] (25 MBps) Copying: 184/1024 [MB] (26 MBps) Copying: 212/1024 [MB] (28 MBps) Copying: 239/1024 [MB] (27 MBps) Copying: 266/1024 [MB] (26 MBps) Copying: 291/1024 [MB] (24 MBps) Copying: 319/1024 [MB] (28 MBps) Copying: 346/1024 [MB] (26 MBps) Copying: 372/1024 [MB] (26 MBps) Copying: 398/1024 [MB] (26 MBps) Copying: 425/1024 [MB] (26 MBps) Copying: 451/1024 [MB] (26 MBps) Copying: 477/1024 [MB] (26 MBps) Copying: 504/1024 [MB] (26 MBps) Copying: 530/1024 [MB] (26 MBps) Copying: 556/1024 [MB] (25 MBps) Copying: 582/1024 [MB] (26 MBps) Copying: 606/1024 [MB] (24 MBps) Copying: 633/1024 [MB] (26 MBps) Copying: 659/1024 [MB] (26 MBps) Copying: 685/1024 [MB] (26 MBps) Copying: 710/1024 [MB] (24 MBps) Copying: 737/1024 [MB] (27 MBps) Copying: 765/1024 [MB] (27 MBps) Copying: 790/1024 [MB] (25 MBps) Copying: 817/1024 [MB] (27 MBps) Copying: 843/1024 [MB] (25 MBps) Copying: 869/1024 [MB] (25 MBps) Copying: 895/1024 [MB] (26 MBps) Copying: 923/1024 [MB] (27 MBps) Copying: 948/1024 [MB] (25 MBps) Copying: 975/1024 [MB] (26 MBps) Copying: 1003/1024 [MB] (27 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-15 13:18:38.483478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.991 [2024-07-15 13:18:38.483565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:41.991 [2024-07-15 13:18:38.483588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:41.991 [2024-07-15 13:18:38.483609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.991 [2024-07-15 13:18:38.483640] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:41.991 [2024-07-15 13:18:38.484506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.991 [2024-07-15 13:18:38.484536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:41.991 [2024-07-15 13:18:38.484551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.843 ms 00:19:41.991 [2024-07-15 
13:18:38.484562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.991 [2024-07-15 13:18:38.486063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.991 [2024-07-15 13:18:38.486116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:41.991 [2024-07-15 13:18:38.486133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.472 ms 00:19:41.991 [2024-07-15 13:18:38.486170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.991 [2024-07-15 13:18:38.501867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.991 [2024-07-15 13:18:38.501918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:41.991 [2024-07-15 13:18:38.501936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.643 ms 00:19:41.991 [2024-07-15 13:18:38.501949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.991 [2024-07-15 13:18:38.508453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.991 [2024-07-15 13:18:38.508492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:41.991 [2024-07-15 13:18:38.508508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.462 ms 00:19:41.991 [2024-07-15 13:18:38.508519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.991 [2024-07-15 13:18:38.510250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.991 [2024-07-15 13:18:38.510289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:41.991 [2024-07-15 13:18:38.510305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.654 ms 00:19:41.991 [2024-07-15 13:18:38.510316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.991 [2024-07-15 13:18:38.514301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.991 [2024-07-15 13:18:38.514342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:41.991 [2024-07-15 13:18:38.514377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.946 ms 00:19:41.991 [2024-07-15 13:18:38.514404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.991 [2024-07-15 13:18:38.514546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.991 [2024-07-15 13:18:38.514565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:41.991 [2024-07-15 13:18:38.514579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:19:41.991 [2024-07-15 13:18:38.514591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.991 [2024-07-15 13:18:38.516310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.991 [2024-07-15 13:18:38.516349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:41.991 [2024-07-15 13:18:38.516364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.689 ms 00:19:41.991 [2024-07-15 13:18:38.516375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.991 [2024-07-15 13:18:38.517731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.991 [2024-07-15 13:18:38.517766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:41.991 [2024-07-15 13:18:38.517781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 1.318 ms 00:19:41.991 [2024-07-15 13:18:38.517791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.991 [2024-07-15 13:18:38.518979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.991 [2024-07-15 13:18:38.519016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:41.991 [2024-07-15 13:18:38.519031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.153 ms 00:19:41.991 [2024-07-15 13:18:38.519041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.991 [2024-07-15 13:18:38.520061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.991 [2024-07-15 13:18:38.520097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:41.991 [2024-07-15 13:18:38.520113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:19:41.991 [2024-07-15 13:18:38.520124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.991 [2024-07-15 13:18:38.520170] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:41.991 [2024-07-15 13:18:38.520194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 
wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.520997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.521010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.521023] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.521035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.521048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.521060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.521073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.521085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.521097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.521109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.521121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.521134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.521159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.521173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.521185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.521198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.521210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.521222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.521235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.521247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.521259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.521271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.521283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.521295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:41.991 [2024-07-15 13:18:38.521307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:41.992 [2024-07-15 13:18:38.521319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:41.992 [2024-07-15 13:18:38.521331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:41.992 [2024-07-15 13:18:38.521343] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:41.992 [2024-07-15 13:18:38.521355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:41.992 [2024-07-15 13:18:38.521368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:41.992 [2024-07-15 13:18:38.521380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:41.992 [2024-07-15 13:18:38.521391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:41.992 [2024-07-15 13:18:38.521403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:41.992 [2024-07-15 13:18:38.521416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:41.992 [2024-07-15 13:18:38.521430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:41.992 [2024-07-15 13:18:38.521442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:41.992 [2024-07-15 13:18:38.521463] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:41.992 [2024-07-15 13:18:38.521476] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 48704cf3-47fc-45de-bbc6-7542cac85d09 00:19:41.992 [2024-07-15 13:18:38.521488] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:41.992 [2024-07-15 13:18:38.521500] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:41.992 [2024-07-15 13:18:38.521511] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:41.992 [2024-07-15 13:18:38.521523] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:41.992 [2024-07-15 13:18:38.521534] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:41.992 [2024-07-15 13:18:38.521546] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:41.992 [2024-07-15 13:18:38.521557] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:41.992 [2024-07-15 13:18:38.521568] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:41.992 [2024-07-15 13:18:38.521579] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:41.992 [2024-07-15 13:18:38.521591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.992 [2024-07-15 13:18:38.521609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:41.992 [2024-07-15 13:18:38.521622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.423 ms 00:19:41.992 [2024-07-15 13:18:38.521633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.992 [2024-07-15 13:18:38.523744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.992 [2024-07-15 13:18:38.523776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:41.992 [2024-07-15 13:18:38.523792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.087 ms 00:19:41.992 [2024-07-15 13:18:38.523803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.992 [2024-07-15 13:18:38.523940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.992 [2024-07-15 13:18:38.523956] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:41.992 [2024-07-15 13:18:38.523969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:19:41.992 [2024-07-15 13:18:38.523980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.992 [2024-07-15 13:18:38.531043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.992 [2024-07-15 13:18:38.531103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:41.992 [2024-07-15 13:18:38.531122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.992 [2024-07-15 13:18:38.531134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.992 [2024-07-15 13:18:38.531259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.992 [2024-07-15 13:18:38.531275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:41.992 [2024-07-15 13:18:38.531287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.992 [2024-07-15 13:18:38.531299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.992 [2024-07-15 13:18:38.531403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.992 [2024-07-15 13:18:38.531422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:41.992 [2024-07-15 13:18:38.531435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.992 [2024-07-15 13:18:38.531447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.992 [2024-07-15 13:18:38.531471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.992 [2024-07-15 13:18:38.531491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:41.992 [2024-07-15 13:18:38.531503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.992 [2024-07-15 13:18:38.531514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.992 [2024-07-15 13:18:38.547737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.992 [2024-07-15 13:18:38.547815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:41.992 [2024-07-15 13:18:38.547837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.992 [2024-07-15 13:18:38.547850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.992 [2024-07-15 13:18:38.557995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.992 [2024-07-15 13:18:38.558081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:41.992 [2024-07-15 13:18:38.558102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.992 [2024-07-15 13:18:38.558114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.992 [2024-07-15 13:18:38.558229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.992 [2024-07-15 13:18:38.558250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:41.992 [2024-07-15 13:18:38.558264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.992 [2024-07-15 13:18:38.558275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.992 [2024-07-15 13:18:38.558330] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:41.992 [2024-07-15 13:18:38.558345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:41.992 [2024-07-15 13:18:38.558375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.992 [2024-07-15 13:18:38.558387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.992 [2024-07-15 13:18:38.558511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.992 [2024-07-15 13:18:38.558532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:41.992 [2024-07-15 13:18:38.558545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.992 [2024-07-15 13:18:38.558556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.992 [2024-07-15 13:18:38.558618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.992 [2024-07-15 13:18:38.558650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:41.992 [2024-07-15 13:18:38.558664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.992 [2024-07-15 13:18:38.558682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.992 [2024-07-15 13:18:38.558730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.992 [2024-07-15 13:18:38.558745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:41.992 [2024-07-15 13:18:38.558758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.992 [2024-07-15 13:18:38.558769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.992 [2024-07-15 13:18:38.558822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.992 [2024-07-15 13:18:38.558838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:41.992 [2024-07-15 13:18:38.558856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.992 [2024-07-15 13:18:38.558868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.992 [2024-07-15 13:18:38.559012] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 75.498 ms, result 0 00:19:42.615 00:19:42.615 00:19:42.615 13:18:39 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:42.615 [2024-07-15 13:18:39.101457] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:19:42.616 [2024-07-15 13:18:39.101669] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90842 ] 00:19:42.616 [2024-07-15 13:18:39.249494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.891 [2024-07-15 13:18:39.349853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.891 [2024-07-15 13:18:39.480774] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:42.891 [2024-07-15 13:18:39.480881] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:43.150 [2024-07-15 13:18:39.635453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.150 [2024-07-15 13:18:39.635543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:43.150 [2024-07-15 13:18:39.635574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:43.150 [2024-07-15 13:18:39.635587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.150 [2024-07-15 13:18:39.635684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.150 [2024-07-15 13:18:39.635715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:43.150 [2024-07-15 13:18:39.635743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:43.150 [2024-07-15 13:18:39.635759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.150 [2024-07-15 13:18:39.635795] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:43.150 [2024-07-15 13:18:39.636207] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:43.150 [2024-07-15 13:18:39.636242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.150 [2024-07-15 13:18:39.636260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:43.150 [2024-07-15 13:18:39.636284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.456 ms 00:19:43.151 [2024-07-15 13:18:39.636296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.151 [2024-07-15 13:18:39.638299] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:43.151 [2024-07-15 13:18:39.641286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.151 [2024-07-15 13:18:39.641339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:43.151 [2024-07-15 13:18:39.641369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.988 ms 00:19:43.151 [2024-07-15 13:18:39.641381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.151 [2024-07-15 13:18:39.641464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.151 [2024-07-15 13:18:39.641487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:43.151 [2024-07-15 13:18:39.641500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:43.151 [2024-07-15 13:18:39.641511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.151 [2024-07-15 13:18:39.650474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.151 [2024-07-15 
13:18:39.650547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:43.151 [2024-07-15 13:18:39.650566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.876 ms 00:19:43.151 [2024-07-15 13:18:39.650578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.151 [2024-07-15 13:18:39.650727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.151 [2024-07-15 13:18:39.650753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:43.151 [2024-07-15 13:18:39.650767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:19:43.151 [2024-07-15 13:18:39.650787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.151 [2024-07-15 13:18:39.650901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.151 [2024-07-15 13:18:39.650920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:43.151 [2024-07-15 13:18:39.650941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:43.151 [2024-07-15 13:18:39.650952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.151 [2024-07-15 13:18:39.650996] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:43.151 [2024-07-15 13:18:39.653220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.151 [2024-07-15 13:18:39.653254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:43.151 [2024-07-15 13:18:39.653270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.240 ms 00:19:43.151 [2024-07-15 13:18:39.653281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.151 [2024-07-15 13:18:39.653345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.151 [2024-07-15 13:18:39.653362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:43.151 [2024-07-15 13:18:39.653387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:43.151 [2024-07-15 13:18:39.653399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.151 [2024-07-15 13:18:39.653442] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:43.151 [2024-07-15 13:18:39.653473] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:43.151 [2024-07-15 13:18:39.653525] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:43.151 [2024-07-15 13:18:39.653560] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:43.151 [2024-07-15 13:18:39.653673] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:43.151 [2024-07-15 13:18:39.653707] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:43.151 [2024-07-15 13:18:39.653724] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:43.151 [2024-07-15 13:18:39.653739] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:43.151 [2024-07-15 13:18:39.653763] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:43.151 [2024-07-15 13:18:39.653775] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:43.151 [2024-07-15 13:18:39.653786] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:43.151 [2024-07-15 13:18:39.653796] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:43.151 [2024-07-15 13:18:39.653816] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:43.151 [2024-07-15 13:18:39.653829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.151 [2024-07-15 13:18:39.653848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:43.151 [2024-07-15 13:18:39.653860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:19:43.151 [2024-07-15 13:18:39.653875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.151 [2024-07-15 13:18:39.653981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.151 [2024-07-15 13:18:39.653998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:43.151 [2024-07-15 13:18:39.654011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:43.151 [2024-07-15 13:18:39.654022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.151 [2024-07-15 13:18:39.654187] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:43.151 [2024-07-15 13:18:39.654216] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:43.151 [2024-07-15 13:18:39.654248] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:43.151 [2024-07-15 13:18:39.654265] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.151 [2024-07-15 13:18:39.654282] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:43.151 [2024-07-15 13:18:39.654292] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:43.151 [2024-07-15 13:18:39.654302] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:43.151 [2024-07-15 13:18:39.654313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:43.151 [2024-07-15 13:18:39.654323] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:43.151 [2024-07-15 13:18:39.654333] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:43.151 [2024-07-15 13:18:39.654343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:43.151 [2024-07-15 13:18:39.654353] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:43.151 [2024-07-15 13:18:39.654363] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:43.151 [2024-07-15 13:18:39.654373] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:43.151 [2024-07-15 13:18:39.654383] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:43.151 [2024-07-15 13:18:39.654393] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.151 [2024-07-15 13:18:39.654408] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:43.151 [2024-07-15 13:18:39.654420] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:43.151 [2024-07-15 13:18:39.654431] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:19:43.151 [2024-07-15 13:18:39.654441] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:43.151 [2024-07-15 13:18:39.654452] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:43.151 [2024-07-15 13:18:39.654462] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:43.151 [2024-07-15 13:18:39.654472] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:43.151 [2024-07-15 13:18:39.654483] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:43.151 [2024-07-15 13:18:39.654493] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:43.151 [2024-07-15 13:18:39.654503] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:43.151 [2024-07-15 13:18:39.654513] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:43.151 [2024-07-15 13:18:39.654523] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:43.151 [2024-07-15 13:18:39.654533] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:43.151 [2024-07-15 13:18:39.654543] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:43.151 [2024-07-15 13:18:39.654553] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:43.151 [2024-07-15 13:18:39.654563] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:43.151 [2024-07-15 13:18:39.654579] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:43.151 [2024-07-15 13:18:39.654590] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:43.151 [2024-07-15 13:18:39.654600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:43.151 [2024-07-15 13:18:39.654616] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:43.151 [2024-07-15 13:18:39.654627] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:43.151 [2024-07-15 13:18:39.654637] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:43.151 [2024-07-15 13:18:39.654647] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:43.151 [2024-07-15 13:18:39.654657] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.151 [2024-07-15 13:18:39.654667] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:43.151 [2024-07-15 13:18:39.654677] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:43.151 [2024-07-15 13:18:39.654686] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.151 [2024-07-15 13:18:39.654696] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:43.151 [2024-07-15 13:18:39.654707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:43.151 [2024-07-15 13:18:39.654718] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:43.151 [2024-07-15 13:18:39.654728] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.151 [2024-07-15 13:18:39.654749] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:43.151 [2024-07-15 13:18:39.654764] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:43.151 [2024-07-15 13:18:39.654775] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:43.151 [2024-07-15 13:18:39.654786] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:43.151 [2024-07-15 13:18:39.654796] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:43.151 [2024-07-15 13:18:39.654806] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:43.151 [2024-07-15 13:18:39.654818] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:43.151 [2024-07-15 13:18:39.654832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:43.151 [2024-07-15 13:18:39.654844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:43.151 [2024-07-15 13:18:39.654859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:43.151 [2024-07-15 13:18:39.654870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:43.151 [2024-07-15 13:18:39.654891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:43.151 [2024-07-15 13:18:39.654903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:43.152 [2024-07-15 13:18:39.654920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:43.152 [2024-07-15 13:18:39.654932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:43.152 [2024-07-15 13:18:39.654951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:43.152 [2024-07-15 13:18:39.654965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:43.152 [2024-07-15 13:18:39.654986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:43.152 [2024-07-15 13:18:39.654998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:43.152 [2024-07-15 13:18:39.655019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:43.152 [2024-07-15 13:18:39.655036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:43.152 [2024-07-15 13:18:39.655052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:43.152 [2024-07-15 13:18:39.655067] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:43.152 [2024-07-15 13:18:39.655083] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:43.152 [2024-07-15 13:18:39.655096] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
00:19:43.152 [2024-07-15 13:18:39.655108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:43.152 [2024-07-15 13:18:39.655131] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:43.152 [2024-07-15 13:18:39.655155] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:43.152 [2024-07-15 13:18:39.655175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.152 [2024-07-15 13:18:39.655194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:43.152 [2024-07-15 13:18:39.655210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.102 ms 00:19:43.152 [2024-07-15 13:18:39.655230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.152 [2024-07-15 13:18:39.679415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.152 [2024-07-15 13:18:39.679483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:43.152 [2024-07-15 13:18:39.679506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.079 ms 00:19:43.152 [2024-07-15 13:18:39.679518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.152 [2024-07-15 13:18:39.679668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.152 [2024-07-15 13:18:39.679687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:43.152 [2024-07-15 13:18:39.679700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:19:43.152 [2024-07-15 13:18:39.679720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.152 [2024-07-15 13:18:39.692975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.152 [2024-07-15 13:18:39.693052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:43.152 [2024-07-15 13:18:39.693073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.140 ms 00:19:43.152 [2024-07-15 13:18:39.693085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.152 [2024-07-15 13:18:39.693173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.152 [2024-07-15 13:18:39.693192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:43.152 [2024-07-15 13:18:39.693206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:43.152 [2024-07-15 13:18:39.693225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.152 [2024-07-15 13:18:39.693890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.152 [2024-07-15 13:18:39.693935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:43.152 [2024-07-15 13:18:39.693951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:19:43.152 [2024-07-15 13:18:39.693975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.152 [2024-07-15 13:18:39.694211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.152 [2024-07-15 13:18:39.694242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:43.152 [2024-07-15 13:18:39.694256] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:19:43.152 [2024-07-15 13:18:39.694267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.152 [2024-07-15 13:18:39.702297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.152 [2024-07-15 13:18:39.702343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:43.152 [2024-07-15 13:18:39.702360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.992 ms 00:19:43.152 [2024-07-15 13:18:39.702390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.152 [2024-07-15 13:18:39.705480] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:43.152 [2024-07-15 13:18:39.705523] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:43.152 [2024-07-15 13:18:39.705547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.152 [2024-07-15 13:18:39.705559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:43.152 [2024-07-15 13:18:39.705572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.018 ms 00:19:43.152 [2024-07-15 13:18:39.705583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.152 [2024-07-15 13:18:39.721568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.152 [2024-07-15 13:18:39.721647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:43.152 [2024-07-15 13:18:39.721667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.938 ms 00:19:43.152 [2024-07-15 13:18:39.721680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.152 [2024-07-15 13:18:39.724073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.152 [2024-07-15 13:18:39.724124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:43.152 [2024-07-15 13:18:39.724141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.291 ms 00:19:43.152 [2024-07-15 13:18:39.724169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.152 [2024-07-15 13:18:39.725717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.152 [2024-07-15 13:18:39.725755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:43.152 [2024-07-15 13:18:39.725771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.503 ms 00:19:43.152 [2024-07-15 13:18:39.725782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.152 [2024-07-15 13:18:39.726300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.152 [2024-07-15 13:18:39.726333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:43.152 [2024-07-15 13:18:39.726349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:19:43.152 [2024-07-15 13:18:39.726360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.152 [2024-07-15 13:18:39.749173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.152 [2024-07-15 13:18:39.749234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:43.152 [2024-07-15 13:18:39.749256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.778 ms 00:19:43.152 
[2024-07-15 13:18:39.749268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.152 [2024-07-15 13:18:39.757540] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:43.152 [2024-07-15 13:18:39.761648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.152 [2024-07-15 13:18:39.761698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:43.152 [2024-07-15 13:18:39.761717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.289 ms 00:19:43.152 [2024-07-15 13:18:39.761729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.152 [2024-07-15 13:18:39.761852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.152 [2024-07-15 13:18:39.761873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:43.152 [2024-07-15 13:18:39.761887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:43.152 [2024-07-15 13:18:39.761908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.152 [2024-07-15 13:18:39.762016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.152 [2024-07-15 13:18:39.762045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:43.152 [2024-07-15 13:18:39.762064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:19:43.152 [2024-07-15 13:18:39.762075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.152 [2024-07-15 13:18:39.762111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.152 [2024-07-15 13:18:39.762126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:43.152 [2024-07-15 13:18:39.762139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:43.152 [2024-07-15 13:18:39.762174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.152 [2024-07-15 13:18:39.762220] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:43.152 [2024-07-15 13:18:39.762239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.152 [2024-07-15 13:18:39.762251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:43.152 [2024-07-15 13:18:39.762280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:43.152 [2024-07-15 13:18:39.762292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.152 [2024-07-15 13:18:39.766347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.152 [2024-07-15 13:18:39.766392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:43.152 [2024-07-15 13:18:39.766410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.025 ms 00:19:43.152 [2024-07-15 13:18:39.766422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.152 [2024-07-15 13:18:39.766507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.152 [2024-07-15 13:18:39.766526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:43.152 [2024-07-15 13:18:39.766539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:43.152 [2024-07-15 13:18:39.766558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.152 [2024-07-15 
13:18:39.767935] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 131.977 ms, result 0 00:20:25.112  Copying: 23/1024 [MB] (23 MBps) Copying: 50/1024 [MB] (26 MBps) Copying: 77/1024 [MB] (26 MBps) Copying: 103/1024 [MB] (26 MBps) Copying: 129/1024 [MB] (25 MBps) Copying: 155/1024 [MB] (26 MBps) Copying: 180/1024 [MB] (25 MBps) Copying: 206/1024 [MB] (25 MBps) Copying: 231/1024 [MB] (24 MBps) Copying: 256/1024 [MB] (24 MBps) Copying: 280/1024 [MB] (24 MBps) Copying: 305/1024 [MB] (25 MBps) Copying: 331/1024 [MB] (25 MBps) Copying: 353/1024 [MB] (22 MBps) Copying: 375/1024 [MB] (21 MBps) Copying: 397/1024 [MB] (22 MBps) Copying: 421/1024 [MB] (24 MBps) Copying: 448/1024 [MB] (26 MBps) Copying: 473/1024 [MB] (25 MBps) Copying: 496/1024 [MB] (23 MBps) Copying: 519/1024 [MB] (22 MBps) Copying: 542/1024 [MB] (23 MBps) Copying: 563/1024 [MB] (20 MBps) Copying: 586/1024 [MB] (22 MBps) Copying: 610/1024 [MB] (24 MBps) Copying: 634/1024 [MB] (23 MBps) Copying: 657/1024 [MB] (22 MBps) Copying: 683/1024 [MB] (26 MBps) Copying: 710/1024 [MB] (26 MBps) Copying: 736/1024 [MB] (26 MBps) Copying: 760/1024 [MB] (23 MBps) Copying: 787/1024 [MB] (27 MBps) Copying: 814/1024 [MB] (26 MBps) Copying: 839/1024 [MB] (25 MBps) Copying: 864/1024 [MB] (25 MBps) Copying: 887/1024 [MB] (22 MBps) Copying: 910/1024 [MB] (22 MBps) Copying: 936/1024 [MB] (25 MBps) Copying: 963/1024 [MB] (26 MBps) Copying: 988/1024 [MB] (24 MBps) Copying: 1010/1024 [MB] (22 MBps) Copying: 1024/1024 [MB] (average 24 MBps)[2024-07-15 13:19:21.668492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.112 [2024-07-15 13:19:21.668658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:25.112 [2024-07-15 13:19:21.668703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:25.112 [2024-07-15 13:19:21.668725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.112 [2024-07-15 13:19:21.668777] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:25.112 [2024-07-15 13:19:21.669824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.112 [2024-07-15 13:19:21.669875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:25.112 [2024-07-15 13:19:21.669901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.993 ms 00:20:25.112 [2024-07-15 13:19:21.669921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.112 [2024-07-15 13:19:21.670447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.112 [2024-07-15 13:19:21.670504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:25.112 [2024-07-15 13:19:21.670529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:20:25.112 [2024-07-15 13:19:21.670560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.112 [2024-07-15 13:19:21.677507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.112 [2024-07-15 13:19:21.677626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:25.112 [2024-07-15 13:19:21.677661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.900 ms 00:20:25.112 [2024-07-15 13:19:21.677677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.112 [2024-07-15 13:19:21.686480] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.112 [2024-07-15 13:19:21.686595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:25.112 [2024-07-15 13:19:21.686633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.736 ms 00:20:25.112 [2024-07-15 13:19:21.686667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.112 [2024-07-15 13:19:21.688918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.112 [2024-07-15 13:19:21.688973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:25.112 [2024-07-15 13:19:21.688995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.123 ms 00:20:25.112 [2024-07-15 13:19:21.689010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.112 [2024-07-15 13:19:21.693669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.112 [2024-07-15 13:19:21.693755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:25.112 [2024-07-15 13:19:21.693796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.621 ms 00:20:25.112 [2024-07-15 13:19:21.693812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.112 [2024-07-15 13:19:21.694005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.112 [2024-07-15 13:19:21.694029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:25.112 [2024-07-15 13:19:21.694046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:20:25.112 [2024-07-15 13:19:21.694067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.112 [2024-07-15 13:19:21.696942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.112 [2024-07-15 13:19:21.697021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:25.113 [2024-07-15 13:19:21.697042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.844 ms 00:20:25.113 [2024-07-15 13:19:21.697056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.113 [2024-07-15 13:19:21.699070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.113 [2024-07-15 13:19:21.699155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:25.113 [2024-07-15 13:19:21.699179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.978 ms 00:20:25.113 [2024-07-15 13:19:21.699194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.113 [2024-07-15 13:19:21.700593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.113 [2024-07-15 13:19:21.700651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:25.113 [2024-07-15 13:19:21.700675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.366 ms 00:20:25.113 [2024-07-15 13:19:21.700695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.113 [2024-07-15 13:19:21.702260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.113 [2024-07-15 13:19:21.702340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:25.113 [2024-07-15 13:19:21.702377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.475 ms 00:20:25.113 [2024-07-15 13:19:21.702399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:25.113 [2024-07-15 13:19:21.702432] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:25.113 [2024-07-15 13:19:21.702458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 
00:20:25.113 [2024-07-15 13:19:21.702830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.702984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 
wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:25.113 [2024-07-15 13:19:21.703900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:25.114 [2024-07-15 13:19:21.703925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:25.114 [2024-07-15 13:19:21.703950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:25.114 [2024-07-15 13:19:21.703977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:25.114 [2024-07-15 13:19:21.703999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:25.114 [2024-07-15 13:19:21.704020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:25.114 [2024-07-15 13:19:21.704041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:25.114 [2024-07-15 13:19:21.704061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:25.114 [2024-07-15 13:19:21.704082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:25.114 [2024-07-15 13:19:21.704102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:25.114 [2024-07-15 13:19:21.704124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:25.114 [2024-07-15 13:19:21.704160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:25.114 [2024-07-15 13:19:21.704184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:25.114 [2024-07-15 13:19:21.704205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:25.114 [2024-07-15 13:19:21.704226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:25.114 [2024-07-15 13:19:21.704246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:25.114 [2024-07-15 13:19:21.704267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:25.114 [2024-07-15 13:19:21.704287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:25.114 [2024-07-15 13:19:21.704308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:25.114 [2024-07-15 13:19:21.704328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:25.114 [2024-07-15 13:19:21.704349] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:25.114 [2024-07-15 13:19:21.704371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:25.114 [2024-07-15 13:19:21.704405] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:25.114 [2024-07-15 13:19:21.704426] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 48704cf3-47fc-45de-bbc6-7542cac85d09 00:20:25.114 [2024-07-15 13:19:21.704450] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:25.114 [2024-07-15 13:19:21.704474] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:25.114 [2024-07-15 13:19:21.704510] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:25.114 [2024-07-15 13:19:21.704534] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:25.114 [2024-07-15 13:19:21.704556] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:25.114 [2024-07-15 13:19:21.704579] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:25.114 [2024-07-15 13:19:21.704613] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:25.114 [2024-07-15 13:19:21.704637] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:25.114 [2024-07-15 13:19:21.704663] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:25.114 [2024-07-15 13:19:21.704685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.114 [2024-07-15 13:19:21.704700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:25.114 [2024-07-15 13:19:21.704730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.255 ms 00:20:25.114 [2024-07-15 13:19:21.704745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.114 [2024-07-15 13:19:21.707610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.114 [2024-07-15 13:19:21.707666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:25.114 [2024-07-15 13:19:21.707686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.819 ms 00:20:25.114 [2024-07-15 13:19:21.707701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.114 [2024-07-15 13:19:21.707870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.114 [2024-07-15 13:19:21.707897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:25.114 [2024-07-15 13:19:21.707928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:20:25.114 [2024-07-15 13:19:21.707943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.114 [2024-07-15 13:19:21.716133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.114 [2024-07-15 13:19:21.716291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:25.114 [2024-07-15 13:19:21.716324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.114 [2024-07-15 13:19:21.716356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.114 [2024-07-15 13:19:21.716482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.114 [2024-07-15 13:19:21.716504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:25.114 [2024-07-15 13:19:21.716520] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.114 [2024-07-15 13:19:21.716535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.114 [2024-07-15 13:19:21.716624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.114 [2024-07-15 13:19:21.716666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:25.114 [2024-07-15 13:19:21.716683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.114 [2024-07-15 13:19:21.716698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.114 [2024-07-15 13:19:21.716736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.114 [2024-07-15 13:19:21.716761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:25.114 [2024-07-15 13:19:21.716777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.114 [2024-07-15 13:19:21.716798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.114 [2024-07-15 13:19:21.735724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.114 [2024-07-15 13:19:21.735815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:25.114 [2024-07-15 13:19:21.735839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.114 [2024-07-15 13:19:21.735855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.114 [2024-07-15 13:19:21.747990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.114 [2024-07-15 13:19:21.748083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:25.114 [2024-07-15 13:19:21.748109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.114 [2024-07-15 13:19:21.748125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.114 [2024-07-15 13:19:21.748249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.114 [2024-07-15 13:19:21.748275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:25.114 [2024-07-15 13:19:21.748291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.114 [2024-07-15 13:19:21.748305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.114 [2024-07-15 13:19:21.748369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.114 [2024-07-15 13:19:21.748400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:25.114 [2024-07-15 13:19:21.748432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.114 [2024-07-15 13:19:21.748446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.114 [2024-07-15 13:19:21.748572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.114 [2024-07-15 13:19:21.748608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:25.114 [2024-07-15 13:19:21.748625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.114 [2024-07-15 13:19:21.748640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.114 [2024-07-15 13:19:21.748701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.114 [2024-07-15 13:19:21.748733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize superblock 00:20:25.114 [2024-07-15 13:19:21.748754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.114 [2024-07-15 13:19:21.748768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.114 [2024-07-15 13:19:21.748827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.114 [2024-07-15 13:19:21.748859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:25.114 [2024-07-15 13:19:21.748876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.114 [2024-07-15 13:19:21.748891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.114 [2024-07-15 13:19:21.748957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.114 [2024-07-15 13:19:21.748986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:25.114 [2024-07-15 13:19:21.749002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.114 [2024-07-15 13:19:21.749017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.114 [2024-07-15 13:19:21.749374] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 80.726 ms, result 0 00:20:25.373 00:20:25.373 00:20:25.373 13:19:22 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:27.901 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:27.901 13:19:24 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:27.901 [2024-07-15 13:19:24.442941] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
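The two steps the restore test runs at this point (restore.sh@76 and @79 above) are, restated as a plain shell sketch taken verbatim from the log: first the checksum verification of the data read back from ftl0, then spdk_dd writing the test file into the FTL bdev at a 131072-block seek offset. Paths and the JSON config are exactly as printed above; this is only a restatement for readability, not the test script itself.

# Verify the data read back from ftl0 against the reference checksum
md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5

# Write the test file into the ftl0 bdev at a 131072-block offset
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
    --ob=ftl0 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
    --seek=131072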
00:20:27.901 [2024-07-15 13:19:24.443170] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91295 ] 00:20:27.901 [2024-07-15 13:19:24.594724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.160 [2024-07-15 13:19:24.693454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.160 [2024-07-15 13:19:24.818642] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:28.160 [2024-07-15 13:19:24.818727] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:28.419 [2024-07-15 13:19:24.971375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.419 [2024-07-15 13:19:24.971460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:28.419 [2024-07-15 13:19:24.971489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:28.419 [2024-07-15 13:19:24.971503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.419 [2024-07-15 13:19:24.971592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.419 [2024-07-15 13:19:24.971613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:28.419 [2024-07-15 13:19:24.971626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:20:28.419 [2024-07-15 13:19:24.971641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.419 [2024-07-15 13:19:24.971676] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:28.419 [2024-07-15 13:19:24.972017] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:28.419 [2024-07-15 13:19:24.972052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.419 [2024-07-15 13:19:24.972070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:28.419 [2024-07-15 13:19:24.972083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:20:28.419 [2024-07-15 13:19:24.972094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.419 [2024-07-15 13:19:24.974042] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:28.419 [2024-07-15 13:19:24.976921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.419 [2024-07-15 13:19:24.976971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:28.419 [2024-07-15 13:19:24.976996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.881 ms 00:20:28.419 [2024-07-15 13:19:24.977008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.419 [2024-07-15 13:19:24.977082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.420 [2024-07-15 13:19:24.977110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:28.420 [2024-07-15 13:19:24.977123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:28.420 [2024-07-15 13:19:24.977134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.420 [2024-07-15 13:19:24.985657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.420 [2024-07-15 
13:19:24.985733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:28.420 [2024-07-15 13:19:24.985753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.420 ms 00:20:28.420 [2024-07-15 13:19:24.985764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.420 [2024-07-15 13:19:24.985912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.420 [2024-07-15 13:19:24.985931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:28.420 [2024-07-15 13:19:24.985953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:20:28.420 [2024-07-15 13:19:24.985964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.420 [2024-07-15 13:19:24.986070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.420 [2024-07-15 13:19:24.986088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:28.420 [2024-07-15 13:19:24.986111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:28.420 [2024-07-15 13:19:24.986122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.420 [2024-07-15 13:19:24.986222] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:28.420 [2024-07-15 13:19:24.988320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.420 [2024-07-15 13:19:24.988367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:28.420 [2024-07-15 13:19:24.988383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.113 ms 00:20:28.420 [2024-07-15 13:19:24.988394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.420 [2024-07-15 13:19:24.988446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.420 [2024-07-15 13:19:24.988472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:28.420 [2024-07-15 13:19:24.988490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:28.420 [2024-07-15 13:19:24.988501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.420 [2024-07-15 13:19:24.988551] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:28.420 [2024-07-15 13:19:24.988582] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:28.420 [2024-07-15 13:19:24.988633] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:28.420 [2024-07-15 13:19:24.988660] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:28.420 [2024-07-15 13:19:24.988791] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:28.420 [2024-07-15 13:19:24.988818] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:28.420 [2024-07-15 13:19:24.988833] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:28.420 [2024-07-15 13:19:24.988849] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:28.420 [2024-07-15 13:19:24.988862] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:28.420 [2024-07-15 13:19:24.988874] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:28.420 [2024-07-15 13:19:24.988886] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:28.420 [2024-07-15 13:19:24.988897] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:28.420 [2024-07-15 13:19:24.988917] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:28.420 [2024-07-15 13:19:24.988938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.420 [2024-07-15 13:19:24.988957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:28.420 [2024-07-15 13:19:24.988969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:20:28.420 [2024-07-15 13:19:24.988984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.420 [2024-07-15 13:19:24.989092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.420 [2024-07-15 13:19:24.989118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:28.420 [2024-07-15 13:19:24.989132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:28.420 [2024-07-15 13:19:24.989169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.420 [2024-07-15 13:19:24.989285] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:28.420 [2024-07-15 13:19:24.989311] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:28.420 [2024-07-15 13:19:24.989325] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:28.420 [2024-07-15 13:19:24.989337] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.420 [2024-07-15 13:19:24.989353] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:28.420 [2024-07-15 13:19:24.989363] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:28.420 [2024-07-15 13:19:24.989374] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:28.420 [2024-07-15 13:19:24.989384] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:28.420 [2024-07-15 13:19:24.989395] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:28.420 [2024-07-15 13:19:24.989405] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:28.420 [2024-07-15 13:19:24.989415] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:28.420 [2024-07-15 13:19:24.989425] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:28.420 [2024-07-15 13:19:24.989436] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:28.420 [2024-07-15 13:19:24.989446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:28.420 [2024-07-15 13:19:24.989456] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:28.420 [2024-07-15 13:19:24.989466] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.420 [2024-07-15 13:19:24.989479] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:28.420 [2024-07-15 13:19:24.989490] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:28.420 [2024-07-15 13:19:24.989503] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:20:28.420 [2024-07-15 13:19:24.989513] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:28.420 [2024-07-15 13:19:24.989525] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:28.420 [2024-07-15 13:19:24.989535] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:28.420 [2024-07-15 13:19:24.989545] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:28.420 [2024-07-15 13:19:24.989556] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:28.420 [2024-07-15 13:19:24.989566] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:28.420 [2024-07-15 13:19:24.989576] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:28.420 [2024-07-15 13:19:24.989586] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:28.420 [2024-07-15 13:19:24.989596] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:28.420 [2024-07-15 13:19:24.989607] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:28.420 [2024-07-15 13:19:24.989617] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:28.420 [2024-07-15 13:19:24.989628] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:28.420 [2024-07-15 13:19:24.989638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:28.420 [2024-07-15 13:19:24.989654] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:28.420 [2024-07-15 13:19:24.989665] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:28.420 [2024-07-15 13:19:24.989675] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:28.420 [2024-07-15 13:19:24.989686] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:28.420 [2024-07-15 13:19:24.989696] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:28.420 [2024-07-15 13:19:24.989706] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:28.420 [2024-07-15 13:19:24.989716] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:28.420 [2024-07-15 13:19:24.989726] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.420 [2024-07-15 13:19:24.989737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:28.420 [2024-07-15 13:19:24.989747] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:28.420 [2024-07-15 13:19:24.989757] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.420 [2024-07-15 13:19:24.989768] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:28.420 [2024-07-15 13:19:24.989779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:28.420 [2024-07-15 13:19:24.989791] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:28.420 [2024-07-15 13:19:24.989802] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.420 [2024-07-15 13:19:24.989813] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:28.420 [2024-07-15 13:19:24.989826] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:28.420 [2024-07-15 13:19:24.989838] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:28.420 [2024-07-15 13:19:24.989849] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:28.420 [2024-07-15 13:19:24.989860] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:28.420 [2024-07-15 13:19:24.989870] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:28.420 [2024-07-15 13:19:24.989882] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:28.420 [2024-07-15 13:19:24.989896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:28.420 [2024-07-15 13:19:24.989909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:28.420 [2024-07-15 13:19:24.989922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:28.420 [2024-07-15 13:19:24.989934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:28.420 [2024-07-15 13:19:24.989945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:28.420 [2024-07-15 13:19:24.989957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:28.420 [2024-07-15 13:19:24.989968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:28.420 [2024-07-15 13:19:24.989980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:28.420 [2024-07-15 13:19:24.989992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:28.421 [2024-07-15 13:19:24.990003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:28.421 [2024-07-15 13:19:24.990017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:28.421 [2024-07-15 13:19:24.990030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:28.421 [2024-07-15 13:19:24.990041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:28.421 [2024-07-15 13:19:24.990053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:28.421 [2024-07-15 13:19:24.990065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:28.421 [2024-07-15 13:19:24.990076] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:28.421 [2024-07-15 13:19:24.990099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:28.421 [2024-07-15 13:19:24.990120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
00:20:28.421 [2024-07-15 13:19:24.990170] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:28.421 [2024-07-15 13:19:24.990198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:28.421 [2024-07-15 13:19:24.990210] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:28.421 [2024-07-15 13:19:24.990223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.421 [2024-07-15 13:19:24.990243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:28.421 [2024-07-15 13:19:24.990263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.008 ms 00:20:28.421 [2024-07-15 13:19:24.990289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-07-15 13:19:25.013424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.421 [2024-07-15 13:19:25.013513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:28.421 [2024-07-15 13:19:25.013537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.036 ms 00:20:28.421 [2024-07-15 13:19:25.013551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-07-15 13:19:25.013687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.421 [2024-07-15 13:19:25.013704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:28.421 [2024-07-15 13:19:25.013718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:20:28.421 [2024-07-15 13:19:25.013745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-07-15 13:19:25.026600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.421 [2024-07-15 13:19:25.026671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:28.421 [2024-07-15 13:19:25.026693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.745 ms 00:20:28.421 [2024-07-15 13:19:25.026705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-07-15 13:19:25.026783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.421 [2024-07-15 13:19:25.026799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:28.421 [2024-07-15 13:19:25.026825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:28.421 [2024-07-15 13:19:25.026841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-07-15 13:19:25.027496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.421 [2024-07-15 13:19:25.027533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:28.421 [2024-07-15 13:19:25.027549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:20:28.421 [2024-07-15 13:19:25.027561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-07-15 13:19:25.027737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.421 [2024-07-15 13:19:25.027770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:28.421 [2024-07-15 13:19:25.027784] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:20:28.421 [2024-07-15 13:19:25.027795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-07-15 13:19:25.035521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.421 [2024-07-15 13:19:25.035590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:28.421 [2024-07-15 13:19:25.035610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.686 ms 00:20:28.421 [2024-07-15 13:19:25.035623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-07-15 13:19:25.038722] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:28.421 [2024-07-15 13:19:25.038774] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:28.421 [2024-07-15 13:19:25.038801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.421 [2024-07-15 13:19:25.038814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:28.421 [2024-07-15 13:19:25.038835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.016 ms 00:20:28.421 [2024-07-15 13:19:25.038846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-07-15 13:19:25.054958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.421 [2024-07-15 13:19:25.055047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:28.421 [2024-07-15 13:19:25.055069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.012 ms 00:20:28.421 [2024-07-15 13:19:25.055082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-07-15 13:19:25.058097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.421 [2024-07-15 13:19:25.058166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:28.421 [2024-07-15 13:19:25.058186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.899 ms 00:20:28.421 [2024-07-15 13:19:25.058198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-07-15 13:19:25.059838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.421 [2024-07-15 13:19:25.059877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:28.421 [2024-07-15 13:19:25.059893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.590 ms 00:20:28.421 [2024-07-15 13:19:25.059904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-07-15 13:19:25.060421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.421 [2024-07-15 13:19:25.060457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:28.421 [2024-07-15 13:19:25.060473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:20:28.421 [2024-07-15 13:19:25.060485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-07-15 13:19:25.085499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.421 [2024-07-15 13:19:25.085583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:28.421 [2024-07-15 13:19:25.085607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.979 ms 00:20:28.421 
[2024-07-15 13:19:25.085620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-07-15 13:19:25.095153] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:28.421 [2024-07-15 13:19:25.099621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.421 [2024-07-15 13:19:25.099673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:28.421 [2024-07-15 13:19:25.099695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.903 ms 00:20:28.421 [2024-07-15 13:19:25.099719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-07-15 13:19:25.099856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.421 [2024-07-15 13:19:25.099877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:28.421 [2024-07-15 13:19:25.099891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:28.421 [2024-07-15 13:19:25.099903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-07-15 13:19:25.100010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.421 [2024-07-15 13:19:25.100044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:28.421 [2024-07-15 13:19:25.100059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:28.421 [2024-07-15 13:19:25.100070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-07-15 13:19:25.100106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.421 [2024-07-15 13:19:25.100126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:28.421 [2024-07-15 13:19:25.100140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:28.421 [2024-07-15 13:19:25.100168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-07-15 13:19:25.100213] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:28.421 [2024-07-15 13:19:25.100230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.421 [2024-07-15 13:19:25.100246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:28.421 [2024-07-15 13:19:25.100271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:28.421 [2024-07-15 13:19:25.100282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-07-15 13:19:25.104593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.421 [2024-07-15 13:19:25.104640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:28.421 [2024-07-15 13:19:25.104658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.280 ms 00:20:28.421 [2024-07-15 13:19:25.104671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-07-15 13:19:25.104756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.421 [2024-07-15 13:19:25.104775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:28.421 [2024-07-15 13:19:25.104805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:28.421 [2024-07-15 13:19:25.104816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-07-15 
13:19:25.106111] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 134.257 ms, result 0 00:21:07.072  Copying: 26/1024 [MB] (26 MBps) Copying: 53/1024 [MB] (26 MBps) Copying: 79/1024 [MB] (25 MBps) Copying: 106/1024 [MB] (26 MBps) Copying: 134/1024 [MB] (27 MBps) Copying: 162/1024 [MB] (28 MBps) Copying: 190/1024 [MB] (27 MBps) Copying: 219/1024 [MB] (29 MBps) Copying: 245/1024 [MB] (25 MBps) Copying: 274/1024 [MB] (29 MBps) Copying: 303/1024 [MB] (28 MBps) Copying: 331/1024 [MB] (28 MBps) Copying: 361/1024 [MB] (29 MBps) Copying: 388/1024 [MB] (27 MBps) Copying: 415/1024 [MB] (27 MBps) Copying: 443/1024 [MB] (27 MBps) Copying: 471/1024 [MB] (28 MBps) Copying: 500/1024 [MB] (28 MBps) Copying: 526/1024 [MB] (26 MBps) Copying: 555/1024 [MB] (28 MBps) Copying: 584/1024 [MB] (29 MBps) Copying: 609/1024 [MB] (24 MBps) Copying: 635/1024 [MB] (25 MBps) Copying: 662/1024 [MB] (27 MBps) Copying: 687/1024 [MB] (25 MBps) Copying: 713/1024 [MB] (25 MBps) Copying: 740/1024 [MB] (26 MBps) Copying: 769/1024 [MB] (29 MBps) Copying: 792/1024 [MB] (22 MBps) Copying: 821/1024 [MB] (28 MBps) Copying: 848/1024 [MB] (27 MBps) Copying: 876/1024 [MB] (27 MBps) Copying: 905/1024 [MB] (28 MBps) Copying: 930/1024 [MB] (25 MBps) Copying: 958/1024 [MB] (27 MBps) Copying: 987/1024 [MB] (28 MBps) Copying: 1015/1024 [MB] (28 MBps) Copying: 1048192/1048576 [kB] (8516 kBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-15 13:20:03.631850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.072 [2024-07-15 13:20:03.631941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:07.072 [2024-07-15 13:20:03.631967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:07.072 [2024-07-15 13:20:03.631980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.072 [2024-07-15 13:20:03.635179] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:07.072 [2024-07-15 13:20:03.637251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.072 [2024-07-15 13:20:03.637310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:07.072 [2024-07-15 13:20:03.637329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.006 ms 00:21:07.072 [2024-07-15 13:20:03.637342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.072 [2024-07-15 13:20:03.651081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.072 [2024-07-15 13:20:03.651191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:07.072 [2024-07-15 13:20:03.651216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.432 ms 00:21:07.073 [2024-07-15 13:20:03.651253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.073 [2024-07-15 13:20:03.674345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.073 [2024-07-15 13:20:03.674460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:07.073 [2024-07-15 13:20:03.674484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.059 ms 00:21:07.073 [2024-07-15 13:20:03.674513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.073 [2024-07-15 13:20:03.681021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.073 [2024-07-15 13:20:03.681084] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:07.073 [2024-07-15 13:20:03.681102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.454 ms 00:21:07.073 [2024-07-15 13:20:03.681117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.073 [2024-07-15 13:20:03.683238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.073 [2024-07-15 13:20:03.683288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:07.073 [2024-07-15 13:20:03.683306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.999 ms 00:21:07.073 [2024-07-15 13:20:03.683317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.073 [2024-07-15 13:20:03.687196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.073 [2024-07-15 13:20:03.687249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:07.073 [2024-07-15 13:20:03.687280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.835 ms 00:21:07.073 [2024-07-15 13:20:03.687292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.073 [2024-07-15 13:20:03.782479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.073 [2024-07-15 13:20:03.782595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:07.073 [2024-07-15 13:20:03.782641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.119 ms 00:21:07.073 [2024-07-15 13:20:03.782654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.073 [2024-07-15 13:20:03.785165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.073 [2024-07-15 13:20:03.785211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:07.073 [2024-07-15 13:20:03.785229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.484 ms 00:21:07.073 [2024-07-15 13:20:03.785240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.073 [2024-07-15 13:20:03.786745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.073 [2024-07-15 13:20:03.786787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:07.073 [2024-07-15 13:20:03.786804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.464 ms 00:21:07.073 [2024-07-15 13:20:03.786814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.073 [2024-07-15 13:20:03.788134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.073 [2024-07-15 13:20:03.788191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:07.073 [2024-07-15 13:20:03.788207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.283 ms 00:21:07.073 [2024-07-15 13:20:03.788217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.073 [2024-07-15 13:20:03.789357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.073 [2024-07-15 13:20:03.789394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:07.073 [2024-07-15 13:20:03.789409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.065 ms 00:21:07.073 [2024-07-15 13:20:03.789419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.073 [2024-07-15 13:20:03.789480] ftl_debug.c: 165:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Bands validity: 00:21:07.073 [2024-07-15 13:20:03.789519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 122624 / 261120 wr_cnt: 1 state: open 00:21:07.073 [2024-07-15 13:20:03.789539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.789999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.790011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.790023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.790035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.790047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.790059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.790071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.790084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.790115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.790138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.790167] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.790180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.790193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.790204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.790217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.790229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.790269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.790282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.790294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.790306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.790318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.790329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.790341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.790353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.790371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:07.073 [2024-07-15 13:20:03.790383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 
13:20:03.790508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 
00:21:07.074 [2024-07-15 13:20:03.790809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:07.074 [2024-07-15 13:20:03.790831] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:07.074 [2024-07-15 13:20:03.790843] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 48704cf3-47fc-45de-bbc6-7542cac85d09 00:21:07.074 [2024-07-15 13:20:03.790869] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 122624 00:21:07.074 [2024-07-15 13:20:03.790880] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 123584 00:21:07.074 [2024-07-15 13:20:03.790891] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 122624 00:21:07.074 [2024-07-15 13:20:03.790903] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0078 00:21:07.074 [2024-07-15 13:20:03.790914] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:07.074 [2024-07-15 13:20:03.790925] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:07.074 [2024-07-15 13:20:03.790936] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:07.074 [2024-07-15 13:20:03.790946] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:07.074 [2024-07-15 13:20:03.790956] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:07.074 [2024-07-15 13:20:03.790969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.074 [2024-07-15 13:20:03.790981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:07.074 [2024-07-15 13:20:03.790993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.514 ms 00:21:07.074 [2024-07-15 13:20:03.791004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.074 [2024-07-15 13:20:03.793199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.074 [2024-07-15 13:20:03.793236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:07.074 [2024-07-15 13:20:03.793252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.165 ms 00:21:07.074 [2024-07-15 13:20:03.793264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.074 [2024-07-15 13:20:03.793405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.074 [2024-07-15 13:20:03.793423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:07.074 [2024-07-15 13:20:03.793444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:21:07.074 [2024-07-15 13:20:03.793455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.074 [2024-07-15 13:20:03.800556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.074 [2024-07-15 13:20:03.800621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:07.074 [2024-07-15 13:20:03.800640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.074 [2024-07-15 13:20:03.800651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.074 [2024-07-15 13:20:03.800736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.074 [2024-07-15 13:20:03.800751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:07.074 [2024-07-15 13:20:03.800770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:21:07.074 [2024-07-15 13:20:03.800790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.074 [2024-07-15 13:20:03.800861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.074 [2024-07-15 13:20:03.800879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:07.074 [2024-07-15 13:20:03.800891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.074 [2024-07-15 13:20:03.800903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.074 [2024-07-15 13:20:03.800926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.074 [2024-07-15 13:20:03.800939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:07.074 [2024-07-15 13:20:03.800951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.074 [2024-07-15 13:20:03.800968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.332 [2024-07-15 13:20:03.817849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.332 [2024-07-15 13:20:03.817930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:07.332 [2024-07-15 13:20:03.817949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.333 [2024-07-15 13:20:03.817985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.333 [2024-07-15 13:20:03.828328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.333 [2024-07-15 13:20:03.828406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:07.333 [2024-07-15 13:20:03.828425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.333 [2024-07-15 13:20:03.828448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.333 [2024-07-15 13:20:03.828533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.333 [2024-07-15 13:20:03.828550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:07.333 [2024-07-15 13:20:03.828562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.333 [2024-07-15 13:20:03.828574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.333 [2024-07-15 13:20:03.828622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.333 [2024-07-15 13:20:03.828636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:07.333 [2024-07-15 13:20:03.828648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.333 [2024-07-15 13:20:03.828660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.333 [2024-07-15 13:20:03.828757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.333 [2024-07-15 13:20:03.828776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:07.333 [2024-07-15 13:20:03.828788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.333 [2024-07-15 13:20:03.828800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.333 [2024-07-15 13:20:03.828874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.333 [2024-07-15 13:20:03.828893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:07.333 
[2024-07-15 13:20:03.828906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.333 [2024-07-15 13:20:03.828917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.333 [2024-07-15 13:20:03.828990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.333 [2024-07-15 13:20:03.829026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:07.333 [2024-07-15 13:20:03.829041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.333 [2024-07-15 13:20:03.829053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.333 [2024-07-15 13:20:03.829111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.333 [2024-07-15 13:20:03.829211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:07.333 [2024-07-15 13:20:03.829229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.333 [2024-07-15 13:20:03.829241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.333 [2024-07-15 13:20:03.829439] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 200.991 ms, result 0 00:21:08.708 00:21:08.708 00:21:08.708 13:20:05 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:08.708 [2024-07-15 13:20:05.224308] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:21:08.708 [2024-07-15 13:20:05.224548] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91706 ] 00:21:08.708 [2024-07-15 13:20:05.382693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.966 [2024-07-15 13:20:05.482037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.966 [2024-07-15 13:20:05.609461] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:08.966 [2024-07-15 13:20:05.609560] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:09.226 [2024-07-15 13:20:05.763439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.226 [2024-07-15 13:20:05.763530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:09.226 [2024-07-15 13:20:05.763552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:09.226 [2024-07-15 13:20:05.763564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.226 [2024-07-15 13:20:05.763655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.226 [2024-07-15 13:20:05.763677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:09.226 [2024-07-15 13:20:05.763691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:21:09.226 [2024-07-15 13:20:05.763710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.226 [2024-07-15 13:20:05.763742] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:09.226 [2024-07-15 13:20:05.764190] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:09.226 [2024-07-15 13:20:05.764233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.226 [2024-07-15 13:20:05.764251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:09.226 [2024-07-15 13:20:05.764265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.498 ms 00:21:09.226 [2024-07-15 13:20:05.764289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.226 [2024-07-15 13:20:05.766350] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:09.226 [2024-07-15 13:20:05.769317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.226 [2024-07-15 13:20:05.769364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:09.226 [2024-07-15 13:20:05.769389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.969 ms 00:21:09.226 [2024-07-15 13:20:05.769402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.226 [2024-07-15 13:20:05.769479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.226 [2024-07-15 13:20:05.769507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:09.226 [2024-07-15 13:20:05.769521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:09.226 [2024-07-15 13:20:05.769533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.226 [2024-07-15 13:20:05.778205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.226 [2024-07-15 13:20:05.778308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:09.226 [2024-07-15 13:20:05.778327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.588 ms 00:21:09.226 [2024-07-15 13:20:05.778339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.226 [2024-07-15 13:20:05.778491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.226 [2024-07-15 13:20:05.778517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:09.226 [2024-07-15 13:20:05.778539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:21:09.226 [2024-07-15 13:20:05.778551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.226 [2024-07-15 13:20:05.778661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.226 [2024-07-15 13:20:05.778680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:09.226 [2024-07-15 13:20:05.778705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:09.226 [2024-07-15 13:20:05.778718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.226 [2024-07-15 13:20:05.778757] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:09.226 [2024-07-15 13:20:05.780857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.226 [2024-07-15 13:20:05.780893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:09.226 [2024-07-15 13:20:05.780909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.110 ms 00:21:09.226 [2024-07-15 13:20:05.780920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.226 
[2024-07-15 13:20:05.780975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.226 [2024-07-15 13:20:05.780993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:09.226 [2024-07-15 13:20:05.781010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:09.226 [2024-07-15 13:20:05.781021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.226 [2024-07-15 13:20:05.781062] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:09.226 [2024-07-15 13:20:05.781095] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:09.226 [2024-07-15 13:20:05.781161] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:09.226 [2024-07-15 13:20:05.781202] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:09.226 [2024-07-15 13:20:05.781314] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:09.226 [2024-07-15 13:20:05.781343] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:09.226 [2024-07-15 13:20:05.781358] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:09.226 [2024-07-15 13:20:05.781373] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:09.226 [2024-07-15 13:20:05.781387] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:09.226 [2024-07-15 13:20:05.781399] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:09.226 [2024-07-15 13:20:05.781410] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:09.226 [2024-07-15 13:20:05.781422] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:09.226 [2024-07-15 13:20:05.781433] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:09.226 [2024-07-15 13:20:05.781445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.226 [2024-07-15 13:20:05.781456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:09.226 [2024-07-15 13:20:05.781477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:21:09.226 [2024-07-15 13:20:05.781493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.226 [2024-07-15 13:20:05.781596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.226 [2024-07-15 13:20:05.781615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:09.226 [2024-07-15 13:20:05.781629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:09.226 [2024-07-15 13:20:05.781640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.226 [2024-07-15 13:20:05.781750] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:09.226 [2024-07-15 13:20:05.781769] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:09.226 [2024-07-15 13:20:05.781782] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:09.226 [2024-07-15 13:20:05.781794] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.226 [2024-07-15 13:20:05.781810] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:09.226 [2024-07-15 13:20:05.781821] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:09.226 [2024-07-15 13:20:05.781833] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:09.226 [2024-07-15 13:20:05.781844] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:09.226 [2024-07-15 13:20:05.781855] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:09.226 [2024-07-15 13:20:05.781865] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:09.226 [2024-07-15 13:20:05.781875] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:09.226 [2024-07-15 13:20:05.781885] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:09.226 [2024-07-15 13:20:05.781896] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:09.226 [2024-07-15 13:20:05.781909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:09.227 [2024-07-15 13:20:05.781921] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:09.227 [2024-07-15 13:20:05.781932] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.227 [2024-07-15 13:20:05.781943] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:09.227 [2024-07-15 13:20:05.781954] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:09.227 [2024-07-15 13:20:05.781964] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.227 [2024-07-15 13:20:05.781975] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:09.227 [2024-07-15 13:20:05.781985] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:09.227 [2024-07-15 13:20:05.781995] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:09.227 [2024-07-15 13:20:05.782006] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:09.227 [2024-07-15 13:20:05.782017] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:09.227 [2024-07-15 13:20:05.782027] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:09.227 [2024-07-15 13:20:05.782037] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:09.227 [2024-07-15 13:20:05.782048] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:09.227 [2024-07-15 13:20:05.782059] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:09.227 [2024-07-15 13:20:05.782069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:09.227 [2024-07-15 13:20:05.782082] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:09.227 [2024-07-15 13:20:05.782110] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:09.227 [2024-07-15 13:20:05.782121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:09.227 [2024-07-15 13:20:05.782132] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:09.227 [2024-07-15 13:20:05.782165] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:09.227 [2024-07-15 13:20:05.782180] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:09.227 [2024-07-15 13:20:05.782191] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:09.227 [2024-07-15 13:20:05.782202] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:09.227 [2024-07-15 13:20:05.782213] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:09.227 [2024-07-15 13:20:05.782224] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:09.227 [2024-07-15 13:20:05.782234] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.227 [2024-07-15 13:20:05.782244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:09.227 [2024-07-15 13:20:05.782255] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:09.227 [2024-07-15 13:20:05.782265] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.227 [2024-07-15 13:20:05.782275] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:09.227 [2024-07-15 13:20:05.782287] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:09.227 [2024-07-15 13:20:05.782302] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:09.227 [2024-07-15 13:20:05.782313] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.227 [2024-07-15 13:20:05.782327] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:09.227 [2024-07-15 13:20:05.782339] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:09.227 [2024-07-15 13:20:05.782350] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:09.227 [2024-07-15 13:20:05.782361] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:09.227 [2024-07-15 13:20:05.782371] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:09.227 [2024-07-15 13:20:05.782382] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:09.227 [2024-07-15 13:20:05.782394] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:09.227 [2024-07-15 13:20:05.782408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:09.227 [2024-07-15 13:20:05.782421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:09.227 [2024-07-15 13:20:05.782432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:09.227 [2024-07-15 13:20:05.782444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:09.227 [2024-07-15 13:20:05.782455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:09.227 [2024-07-15 13:20:05.782467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:09.227 [2024-07-15 13:20:05.782478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:09.227 [2024-07-15 13:20:05.782492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:09.227 [2024-07-15 
13:20:05.782504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:09.227 [2024-07-15 13:20:05.782515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:09.227 [2024-07-15 13:20:05.782526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:09.227 [2024-07-15 13:20:05.782538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:09.227 [2024-07-15 13:20:05.782549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:09.227 [2024-07-15 13:20:05.782560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:09.227 [2024-07-15 13:20:05.782573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:09.227 [2024-07-15 13:20:05.782584] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:09.227 [2024-07-15 13:20:05.782597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:09.227 [2024-07-15 13:20:05.782609] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:09.227 [2024-07-15 13:20:05.782621] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:09.227 [2024-07-15 13:20:05.782645] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:09.227 [2024-07-15 13:20:05.782657] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:09.227 [2024-07-15 13:20:05.782669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.227 [2024-07-15 13:20:05.782681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:09.227 [2024-07-15 13:20:05.782696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.984 ms 00:21:09.227 [2024-07-15 13:20:05.782711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.227 [2024-07-15 13:20:05.807602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.227 [2024-07-15 13:20:05.807691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:09.227 [2024-07-15 13:20:05.807720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.749 ms 00:21:09.227 [2024-07-15 13:20:05.807737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.227 [2024-07-15 13:20:05.807955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.227 [2024-07-15 13:20:05.807980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:09.227 [2024-07-15 13:20:05.807997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:21:09.227 [2024-07-15 13:20:05.808012] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.227 [2024-07-15 13:20:05.822042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.227 [2024-07-15 13:20:05.822121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:09.227 [2024-07-15 13:20:05.822166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.915 ms 00:21:09.227 [2024-07-15 13:20:05.822180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.227 [2024-07-15 13:20:05.822257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.227 [2024-07-15 13:20:05.822275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:09.227 [2024-07-15 13:20:05.822289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:09.227 [2024-07-15 13:20:05.822307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.227 [2024-07-15 13:20:05.822944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.227 [2024-07-15 13:20:05.822979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:09.227 [2024-07-15 13:20:05.823003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:21:09.227 [2024-07-15 13:20:05.823016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.227 [2024-07-15 13:20:05.823226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.227 [2024-07-15 13:20:05.823271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:09.227 [2024-07-15 13:20:05.823285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:21:09.227 [2024-07-15 13:20:05.823296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.227 [2024-07-15 13:20:05.831071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.227 [2024-07-15 13:20:05.831178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:09.227 [2024-07-15 13:20:05.831199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.736 ms 00:21:09.227 [2024-07-15 13:20:05.831212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.227 [2024-07-15 13:20:05.834276] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:09.227 [2024-07-15 13:20:05.834327] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:09.227 [2024-07-15 13:20:05.834352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.227 [2024-07-15 13:20:05.834370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:09.227 [2024-07-15 13:20:05.834385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.976 ms 00:21:09.227 [2024-07-15 13:20:05.834397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.227 [2024-07-15 13:20:05.850455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.227 [2024-07-15 13:20:05.850557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:09.227 [2024-07-15 13:20:05.850579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.948 ms 00:21:09.227 [2024-07-15 13:20:05.850592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.227 [2024-07-15 
13:20:05.853680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.228 [2024-07-15 13:20:05.853730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:09.228 [2024-07-15 13:20:05.853747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.998 ms 00:21:09.228 [2024-07-15 13:20:05.853759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.228 [2024-07-15 13:20:05.855479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.228 [2024-07-15 13:20:05.855520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:09.228 [2024-07-15 13:20:05.855536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.670 ms 00:21:09.228 [2024-07-15 13:20:05.855547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.228 [2024-07-15 13:20:05.856023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.228 [2024-07-15 13:20:05.856061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:09.228 [2024-07-15 13:20:05.856080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:21:09.228 [2024-07-15 13:20:05.856092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.228 [2024-07-15 13:20:05.880098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.228 [2024-07-15 13:20:05.880211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:09.228 [2024-07-15 13:20:05.880236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.973 ms 00:21:09.228 [2024-07-15 13:20:05.880249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.228 [2024-07-15 13:20:05.889961] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:09.228 [2024-07-15 13:20:05.894531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.228 [2024-07-15 13:20:05.894592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:09.228 [2024-07-15 13:20:05.894613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.184 ms 00:21:09.228 [2024-07-15 13:20:05.894625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.228 [2024-07-15 13:20:05.894761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.228 [2024-07-15 13:20:05.894782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:09.228 [2024-07-15 13:20:05.894800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:09.228 [2024-07-15 13:20:05.894812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.228 [2024-07-15 13:20:05.896957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.228 [2024-07-15 13:20:05.897003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:09.228 [2024-07-15 13:20:05.897024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.087 ms 00:21:09.228 [2024-07-15 13:20:05.897035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.228 [2024-07-15 13:20:05.897084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.228 [2024-07-15 13:20:05.897115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:09.228 [2024-07-15 13:20:05.897176] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:09.228 [2024-07-15 13:20:05.897191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.228 [2024-07-15 13:20:05.897240] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:09.228 [2024-07-15 13:20:05.897259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.228 [2024-07-15 13:20:05.897270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:09.228 [2024-07-15 13:20:05.897288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:09.228 [2024-07-15 13:20:05.897312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.228 [2024-07-15 13:20:05.901688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.228 [2024-07-15 13:20:05.901743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:09.228 [2024-07-15 13:20:05.901761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.344 ms 00:21:09.228 [2024-07-15 13:20:05.901774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.228 [2024-07-15 13:20:05.901863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.228 [2024-07-15 13:20:05.901883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:09.228 [2024-07-15 13:20:05.901896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:21:09.228 [2024-07-15 13:20:05.901916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.228 [2024-07-15 13:20:05.910448] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 144.799 ms, result 0 00:21:49.244  Copying: 23/1024 [MB] (23 MBps) Copying: 48/1024 [MB] (25 MBps) Copying: 73/1024 [MB] (25 MBps) Copying: 100/1024 [MB] (26 MBps) Copying: 127/1024 [MB] (26 MBps) Copying: 152/1024 [MB] (24 MBps) Copying: 177/1024 [MB] (25 MBps) Copying: 203/1024 [MB] (25 MBps) Copying: 228/1024 [MB] (25 MBps) Copying: 256/1024 [MB] (27 MBps) Copying: 285/1024 [MB] (28 MBps) Copying: 313/1024 [MB] (28 MBps) Copying: 339/1024 [MB] (25 MBps) Copying: 366/1024 [MB] (26 MBps) Copying: 394/1024 [MB] (27 MBps) Copying: 420/1024 [MB] (25 MBps) Copying: 446/1024 [MB] (26 MBps) Copying: 472/1024 [MB] (25 MBps) Copying: 497/1024 [MB] (24 MBps) Copying: 525/1024 [MB] (27 MBps) Copying: 553/1024 [MB] (28 MBps) Copying: 579/1024 [MB] (25 MBps) Copying: 605/1024 [MB] (25 MBps) Copying: 632/1024 [MB] (27 MBps) Copying: 660/1024 [MB] (27 MBps) Copying: 686/1024 [MB] (26 MBps) Copying: 710/1024 [MB] (24 MBps) Copying: 735/1024 [MB] (25 MBps) Copying: 760/1024 [MB] (25 MBps) Copying: 785/1024 [MB] (25 MBps) Copying: 809/1024 [MB] (24 MBps) Copying: 834/1024 [MB] (24 MBps) Copying: 859/1024 [MB] (25 MBps) Copying: 886/1024 [MB] (26 MBps) Copying: 912/1024 [MB] (25 MBps) Copying: 938/1024 [MB] (25 MBps) Copying: 963/1024 [MB] (25 MBps) Copying: 988/1024 [MB] (24 MBps) Copying: 1013/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-15 13:20:45.797613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.244 [2024-07-15 13:20:45.797727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:49.244 [2024-07-15 13:20:45.797764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:49.244 [2024-07-15 
13:20:45.797783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.244 [2024-07-15 13:20:45.797827] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:49.244 [2024-07-15 13:20:45.798804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.244 [2024-07-15 13:20:45.798844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:49.244 [2024-07-15 13:20:45.798887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.947 ms 00:21:49.244 [2024-07-15 13:20:45.798903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.244 [2024-07-15 13:20:45.799488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.244 [2024-07-15 13:20:45.799527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:49.244 [2024-07-15 13:20:45.799547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:21:49.244 [2024-07-15 13:20:45.799563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.244 [2024-07-15 13:20:45.805478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.244 [2024-07-15 13:20:45.805533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:49.244 [2024-07-15 13:20:45.805554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.884 ms 00:21:49.244 [2024-07-15 13:20:45.805579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.244 [2024-07-15 13:20:45.814286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.244 [2024-07-15 13:20:45.814354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:49.244 [2024-07-15 13:20:45.814375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.650 ms 00:21:49.244 [2024-07-15 13:20:45.814389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.244 [2024-07-15 13:20:45.816297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.244 [2024-07-15 13:20:45.816348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:49.244 [2024-07-15 13:20:45.816364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.804 ms 00:21:49.244 [2024-07-15 13:20:45.816376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.244 [2024-07-15 13:20:45.820543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.244 [2024-07-15 13:20:45.820589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:49.244 [2024-07-15 13:20:45.820624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.125 ms 00:21:49.244 [2024-07-15 13:20:45.820642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.244 [2024-07-15 13:20:45.921276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.244 [2024-07-15 13:20:45.921359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:49.244 [2024-07-15 13:20:45.921381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.582 ms 00:21:49.244 [2024-07-15 13:20:45.921393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.244 [2024-07-15 13:20:45.923939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.244 [2024-07-15 13:20:45.923983] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:49.244 [2024-07-15 13:20:45.923999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.520 ms 00:21:49.244 [2024-07-15 13:20:45.924010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.244 [2024-07-15 13:20:45.925438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.244 [2024-07-15 13:20:45.925477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:49.244 [2024-07-15 13:20:45.925492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.389 ms 00:21:49.244 [2024-07-15 13:20:45.925503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.244 [2024-07-15 13:20:45.926805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.244 [2024-07-15 13:20:45.926843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:49.244 [2024-07-15 13:20:45.926857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.265 ms 00:21:49.244 [2024-07-15 13:20:45.926869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.244 [2024-07-15 13:20:45.928134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.244 [2024-07-15 13:20:45.928188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:49.244 [2024-07-15 13:20:45.928203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.198 ms 00:21:49.244 [2024-07-15 13:20:45.928213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.244 [2024-07-15 13:20:45.928253] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:49.244 [2024-07-15 13:20:45.928278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:21:49.244 [2024-07-15 13:20:45.928293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:49.244 [2024-07-15 13:20:45.928306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:49.244 [2024-07-15 13:20:45.928318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:49.244 [2024-07-15 13:20:45.928333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:49.244 [2024-07-15 13:20:45.928345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:49.244 [2024-07-15 13:20:45.928358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:49.244 [2024-07-15 13:20:45.928370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:49.244 [2024-07-15 13:20:45.928382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:49.244 [2024-07-15 13:20:45.928395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:49.244 [2024-07-15 13:20:45.928407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:49.244 [2024-07-15 13:20:45.928419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:49.244 [2024-07-15 13:20:45.928431] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:49.244 [2024-07-15 13:20:45.928443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:49.244 [2024-07-15 13:20:45.928455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:49.244 [2024-07-15 13:20:45.928467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:49.244 [2024-07-15 13:20:45.928479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:49.244 [2024-07-15 13:20:45.928491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:49.244 [2024-07-15 13:20:45.928502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:49.244 [2024-07-15 13:20:45.928514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:49.244 [2024-07-15 13:20:45.928526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928735] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.928985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 
13:20:45.929090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:21:49.245 [2024-07-15 13:20:45.929425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:49.245 [2024-07-15 13:20:45.929610] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:49.245 [2024-07-15 13:20:45.929622] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 48704cf3-47fc-45de-bbc6-7542cac85d09 00:21:49.245 [2024-07-15 13:20:45.929635] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:21:49.245 [2024-07-15 13:20:45.929654] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 11968 00:21:49.245 [2024-07-15 13:20:45.929666] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 11008 00:21:49.245 [2024-07-15 13:20:45.929678] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0872 00:21:49.245 [2024-07-15 13:20:45.929689] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:49.245 [2024-07-15 13:20:45.929711] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:49.245 [2024-07-15 13:20:45.929723] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:49.245 [2024-07-15 13:20:45.929733] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:49.245 [2024-07-15 13:20:45.929743] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:49.245 [2024-07-15 13:20:45.929755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.245 [2024-07-15 13:20:45.929767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:49.245 [2024-07-15 13:20:45.929780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.504 ms 00:21:49.245 [2024-07-15 13:20:45.929791] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:49.245 [2024-07-15 13:20:45.931965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.245 [2024-07-15 13:20:45.932002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:49.245 [2024-07-15 13:20:45.932017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.148 ms 00:21:49.245 [2024-07-15 13:20:45.932029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.245 [2024-07-15 13:20:45.932177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.246 [2024-07-15 13:20:45.932197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:49.246 [2024-07-15 13:20:45.932211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:21:49.246 [2024-07-15 13:20:45.932232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.246 [2024-07-15 13:20:45.939364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.246 [2024-07-15 13:20:45.939419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:49.246 [2024-07-15 13:20:45.939436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.246 [2024-07-15 13:20:45.939448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.246 [2024-07-15 13:20:45.939532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.246 [2024-07-15 13:20:45.939547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:49.246 [2024-07-15 13:20:45.939560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.246 [2024-07-15 13:20:45.939579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.246 [2024-07-15 13:20:45.939680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.246 [2024-07-15 13:20:45.939704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:49.246 [2024-07-15 13:20:45.939717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.246 [2024-07-15 13:20:45.939728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.246 [2024-07-15 13:20:45.939752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.246 [2024-07-15 13:20:45.939766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:49.246 [2024-07-15 13:20:45.939777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.246 [2024-07-15 13:20:45.939788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.246 [2024-07-15 13:20:45.956125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.246 [2024-07-15 13:20:45.956214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:49.246 [2024-07-15 13:20:45.956235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.246 [2024-07-15 13:20:45.956247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.246 [2024-07-15 13:20:45.966693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.246 [2024-07-15 13:20:45.966766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:49.246 [2024-07-15 13:20:45.966787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:21:49.246 [2024-07-15 13:20:45.966799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.246 [2024-07-15 13:20:45.966903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.246 [2024-07-15 13:20:45.966921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:49.246 [2024-07-15 13:20:45.966934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.246 [2024-07-15 13:20:45.966946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.246 [2024-07-15 13:20:45.966994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.246 [2024-07-15 13:20:45.967009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:49.246 [2024-07-15 13:20:45.967021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.246 [2024-07-15 13:20:45.967032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.246 [2024-07-15 13:20:45.967128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.246 [2024-07-15 13:20:45.967169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:49.246 [2024-07-15 13:20:45.967183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.246 [2024-07-15 13:20:45.967194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.246 [2024-07-15 13:20:45.967245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.246 [2024-07-15 13:20:45.967264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:49.246 [2024-07-15 13:20:45.967277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.246 [2024-07-15 13:20:45.967288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.246 [2024-07-15 13:20:45.967334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.246 [2024-07-15 13:20:45.967358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:49.246 [2024-07-15 13:20:45.967370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.246 [2024-07-15 13:20:45.967382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.246 [2024-07-15 13:20:45.967435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.246 [2024-07-15 13:20:45.967451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:49.246 [2024-07-15 13:20:45.967464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.246 [2024-07-15 13:20:45.967475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.246 [2024-07-15 13:20:45.967648] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 170.007 ms, result 0 00:21:49.811 00:21:49.811 00:21:49.811 13:20:46 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:52.341 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:52.341 13:20:48 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:21:52.341 13:20:48 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:21:52.341 13:20:48 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:52.341 13:20:48 ftl.ftl_restore -- 
ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:52.341 13:20:48 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:52.341 13:20:48 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 90220 00:21:52.341 13:20:48 ftl.ftl_restore -- common/autotest_common.sh@946 -- # '[' -z 90220 ']' 00:21:52.341 13:20:48 ftl.ftl_restore -- common/autotest_common.sh@950 -- # kill -0 90220 00:21:52.341 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (90220) - No such process 00:21:52.341 Process with pid 90220 is not found 00:21:52.341 13:20:48 ftl.ftl_restore -- common/autotest_common.sh@973 -- # echo 'Process with pid 90220 is not found' 00:21:52.341 13:20:48 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:21:52.341 Remove shared memory files 00:21:52.341 13:20:48 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:52.341 13:20:48 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:21:52.341 13:20:48 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:21:52.341 13:20:48 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:21:52.341 13:20:48 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:52.341 13:20:48 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:21:52.341 00:21:52.341 real 3m8.004s 00:21:52.341 user 2m53.383s 00:21:52.341 sys 0m16.595s 00:21:52.341 13:20:48 ftl.ftl_restore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:52.341 13:20:48 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:52.341 ************************************ 00:21:52.341 END TEST ftl_restore 00:21:52.341 ************************************ 00:21:52.341 13:20:48 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:21:52.341 13:20:48 ftl -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:21:52.341 13:20:48 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:52.341 13:20:48 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:52.341 ************************************ 00:21:52.341 START TEST ftl_dirty_shutdown 00:21:52.341 ************************************ 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:21:52.341 * Looking for test storage... 00:21:52.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
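The write-amplification figure reported in the FTL shutdown statistics above is presumably just the ratio of total media writes to user writes; recomputing it from the dumped counters (11968 total writes, 11008 user writes) reproduces the logged value. A minimal check, assuming that definition:
  awk 'BEGIN { printf "WAF = %.4f\n", 11968 / 11008 }'   # prints WAF = 1.0872, matching the stats dump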
00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # 
device=0000:00:11.0 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=92198 00:21:52.341 13:20:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:52.342 13:20:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 92198 00:21:52.342 13:20:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@827 -- # '[' -z 92198 ']' 00:21:52.342 13:20:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.342 13:20:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:52.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.342 13:20:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.342 13:20:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:52.342 13:20:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:52.342 [2024-07-15 13:20:48.928781] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:21:52.342 [2024-07-15 13:20:48.928948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92198 ] 00:21:52.342 [2024-07-15 13:20:49.069231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.600 [2024-07-15 13:20:49.171490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.166 13:20:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:53.166 13:20:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # return 0 00:21:53.166 13:20:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:53.166 13:20:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:21:53.166 13:20:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:53.166 13:20:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:21:53.166 13:20:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:21:53.166 13:20:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:53.738 13:20:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:53.738 13:20:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:21:53.738 13:20:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:53.738 13:20:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:21:53.738 13:20:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local 
bdev_info 00:21:53.738 13:20:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:21:53.738 13:20:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:21:53.738 13:20:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:53.738 13:20:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:21:53.738 { 00:21:53.738 "name": "nvme0n1", 00:21:53.738 "aliases": [ 00:21:53.738 "82a9002a-32cd-4d84-b520-0c9de89f8780" 00:21:53.738 ], 00:21:53.738 "product_name": "NVMe disk", 00:21:53.738 "block_size": 4096, 00:21:53.738 "num_blocks": 1310720, 00:21:53.738 "uuid": "82a9002a-32cd-4d84-b520-0c9de89f8780", 00:21:53.738 "assigned_rate_limits": { 00:21:53.738 "rw_ios_per_sec": 0, 00:21:53.738 "rw_mbytes_per_sec": 0, 00:21:53.738 "r_mbytes_per_sec": 0, 00:21:53.738 "w_mbytes_per_sec": 0 00:21:53.738 }, 00:21:53.738 "claimed": true, 00:21:53.738 "claim_type": "read_many_write_one", 00:21:53.738 "zoned": false, 00:21:53.738 "supported_io_types": { 00:21:53.738 "read": true, 00:21:53.738 "write": true, 00:21:53.738 "unmap": true, 00:21:53.738 "write_zeroes": true, 00:21:53.738 "flush": true, 00:21:53.738 "reset": true, 00:21:53.738 "compare": true, 00:21:53.739 "compare_and_write": false, 00:21:53.739 "abort": true, 00:21:53.739 "nvme_admin": true, 00:21:53.739 "nvme_io": true 00:21:53.739 }, 00:21:53.739 "driver_specific": { 00:21:53.739 "nvme": [ 00:21:53.739 { 00:21:53.739 "pci_address": "0000:00:11.0", 00:21:53.739 "trid": { 00:21:53.739 "trtype": "PCIe", 00:21:53.739 "traddr": "0000:00:11.0" 00:21:53.739 }, 00:21:53.739 "ctrlr_data": { 00:21:53.739 "cntlid": 0, 00:21:53.739 "vendor_id": "0x1b36", 00:21:53.739 "model_number": "QEMU NVMe Ctrl", 00:21:53.739 "serial_number": "12341", 00:21:53.739 "firmware_revision": "8.0.0", 00:21:53.739 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:53.739 "oacs": { 00:21:53.739 "security": 0, 00:21:53.739 "format": 1, 00:21:53.739 "firmware": 0, 00:21:53.739 "ns_manage": 1 00:21:53.739 }, 00:21:53.739 "multi_ctrlr": false, 00:21:53.739 "ana_reporting": false 00:21:53.739 }, 00:21:53.739 "vs": { 00:21:53.739 "nvme_version": "1.4" 00:21:53.739 }, 00:21:53.739 "ns_data": { 00:21:53.739 "id": 1, 00:21:53.739 "can_share": false 00:21:53.739 } 00:21:53.739 } 00:21:53.739 ], 00:21:53.739 "mp_policy": "active_passive" 00:21:53.739 } 00:21:53.739 } 00:21:53.739 ]' 00:21:53.739 13:20:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:21:53.996 13:20:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:21:53.996 13:20:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:21:53.996 13:20:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=1310720 00:21:53.996 13:20:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:21:53.996 13:20:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 5120 00:21:53.996 13:20:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:21:53.996 13:20:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:53.996 13:20:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:21:53.996 13:20:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:53.996 13:20:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 
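The get_bdev_size helper traced above evidently reads the bdev_get_bdevs JSON and reports the device size in MiB as block_size * num_blocks / (1024 * 1024); for nvme0n1 that is 4096 * 1310720 / 1048576 = 5120 MiB, which is the value echoed back. A rough stand-alone sketch of the same calculation (rpc.py path and bdev name taken from this run, jq assumed available):
  info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1)
  bs=$(jq '.[] .block_size' <<< "$info")    # 4096 in this run
  nb=$(jq '.[] .num_blocks' <<< "$info")    # 1310720 in this run
  echo $(( bs * nb / 1024 / 1024 ))         # 5120 (MiB)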
00:21:54.252 13:20:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=4ab0be0c-3459-4fc8-8870-e87de6e03d41 00:21:54.252 13:20:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:21:54.252 13:20:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4ab0be0c-3459-4fc8-8870-e87de6e03d41 00:21:54.509 13:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:54.767 13:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=9c11b478-07c2-42a8-b6c1-50dfc43cce37 00:21:54.767 13:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9c11b478-07c2-42a8-b6c1-50dfc43cce37 00:21:55.024 13:20:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=82c6777e-e670-4664-a3fa-70d852066626 00:21:55.024 13:20:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:21:55.024 13:20:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 82c6777e-e670-4664-a3fa-70d852066626 00:21:55.024 13:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:21:55.024 13:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:55.024 13:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=82c6777e-e670-4664-a3fa-70d852066626 00:21:55.024 13:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:21:55.024 13:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 82c6777e-e670-4664-a3fa-70d852066626 00:21:55.024 13:20:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=82c6777e-e670-4664-a3fa-70d852066626 00:21:55.024 13:20:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:21:55.024 13:20:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:21:55.024 13:20:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:21:55.024 13:20:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 82c6777e-e670-4664-a3fa-70d852066626 00:21:55.281 13:20:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:21:55.281 { 00:21:55.281 "name": "82c6777e-e670-4664-a3fa-70d852066626", 00:21:55.281 "aliases": [ 00:21:55.281 "lvs/nvme0n1p0" 00:21:55.281 ], 00:21:55.281 "product_name": "Logical Volume", 00:21:55.281 "block_size": 4096, 00:21:55.281 "num_blocks": 26476544, 00:21:55.281 "uuid": "82c6777e-e670-4664-a3fa-70d852066626", 00:21:55.281 "assigned_rate_limits": { 00:21:55.281 "rw_ios_per_sec": 0, 00:21:55.281 "rw_mbytes_per_sec": 0, 00:21:55.281 "r_mbytes_per_sec": 0, 00:21:55.281 "w_mbytes_per_sec": 0 00:21:55.281 }, 00:21:55.281 "claimed": false, 00:21:55.281 "zoned": false, 00:21:55.281 "supported_io_types": { 00:21:55.281 "read": true, 00:21:55.281 "write": true, 00:21:55.281 "unmap": true, 00:21:55.281 "write_zeroes": true, 00:21:55.281 "flush": false, 00:21:55.281 "reset": true, 00:21:55.281 "compare": false, 00:21:55.281 "compare_and_write": false, 00:21:55.281 "abort": false, 00:21:55.281 "nvme_admin": false, 00:21:55.281 "nvme_io": false 00:21:55.281 }, 00:21:55.281 "driver_specific": { 00:21:55.281 "lvol": { 00:21:55.281 "lvol_store_uuid": "9c11b478-07c2-42a8-b6c1-50dfc43cce37", 00:21:55.281 
"base_bdev": "nvme0n1", 00:21:55.281 "thin_provision": true, 00:21:55.281 "num_allocated_clusters": 0, 00:21:55.281 "snapshot": false, 00:21:55.281 "clone": false, 00:21:55.281 "esnap_clone": false 00:21:55.281 } 00:21:55.281 } 00:21:55.281 } 00:21:55.281 ]' 00:21:55.281 13:20:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:21:55.540 13:20:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:21:55.540 13:20:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:21:55.540 13:20:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=26476544 00:21:55.540 13:20:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:21:55.540 13:20:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 103424 00:21:55.540 13:20:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:21:55.540 13:20:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:21:55.540 13:20:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:55.797 13:20:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:55.797 13:20:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:55.797 13:20:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 82c6777e-e670-4664-a3fa-70d852066626 00:21:55.797 13:20:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=82c6777e-e670-4664-a3fa-70d852066626 00:21:55.797 13:20:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:21:55.797 13:20:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:21:55.797 13:20:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:21:55.797 13:20:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 82c6777e-e670-4664-a3fa-70d852066626 00:21:56.054 13:20:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:21:56.054 { 00:21:56.054 "name": "82c6777e-e670-4664-a3fa-70d852066626", 00:21:56.054 "aliases": [ 00:21:56.054 "lvs/nvme0n1p0" 00:21:56.054 ], 00:21:56.054 "product_name": "Logical Volume", 00:21:56.054 "block_size": 4096, 00:21:56.054 "num_blocks": 26476544, 00:21:56.054 "uuid": "82c6777e-e670-4664-a3fa-70d852066626", 00:21:56.054 "assigned_rate_limits": { 00:21:56.054 "rw_ios_per_sec": 0, 00:21:56.054 "rw_mbytes_per_sec": 0, 00:21:56.054 "r_mbytes_per_sec": 0, 00:21:56.054 "w_mbytes_per_sec": 0 00:21:56.054 }, 00:21:56.054 "claimed": false, 00:21:56.054 "zoned": false, 00:21:56.054 "supported_io_types": { 00:21:56.054 "read": true, 00:21:56.054 "write": true, 00:21:56.054 "unmap": true, 00:21:56.054 "write_zeroes": true, 00:21:56.054 "flush": false, 00:21:56.054 "reset": true, 00:21:56.054 "compare": false, 00:21:56.054 "compare_and_write": false, 00:21:56.054 "abort": false, 00:21:56.054 "nvme_admin": false, 00:21:56.054 "nvme_io": false 00:21:56.054 }, 00:21:56.054 "driver_specific": { 00:21:56.054 "lvol": { 00:21:56.054 "lvol_store_uuid": "9c11b478-07c2-42a8-b6c1-50dfc43cce37", 00:21:56.054 "base_bdev": "nvme0n1", 00:21:56.054 "thin_provision": true, 00:21:56.054 "num_allocated_clusters": 0, 00:21:56.054 "snapshot": false, 00:21:56.054 "clone": false, 00:21:56.054 "esnap_clone": false 00:21:56.054 } 00:21:56.054 } 00:21:56.054 
} 00:21:56.054 ]' 00:21:56.054 13:20:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:21:56.054 13:20:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:21:56.054 13:20:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:21:56.054 13:20:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=26476544 00:21:56.054 13:20:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:21:56.054 13:20:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 103424 00:21:56.054 13:20:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:21:56.054 13:20:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:56.620 13:20:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:21:56.620 13:20:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 82c6777e-e670-4664-a3fa-70d852066626 00:21:56.620 13:20:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=82c6777e-e670-4664-a3fa-70d852066626 00:21:56.620 13:20:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:21:56.620 13:20:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:21:56.620 13:20:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:21:56.620 13:20:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 82c6777e-e670-4664-a3fa-70d852066626 00:21:56.620 13:20:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:21:56.620 { 00:21:56.620 "name": "82c6777e-e670-4664-a3fa-70d852066626", 00:21:56.620 "aliases": [ 00:21:56.620 "lvs/nvme0n1p0" 00:21:56.620 ], 00:21:56.620 "product_name": "Logical Volume", 00:21:56.620 "block_size": 4096, 00:21:56.620 "num_blocks": 26476544, 00:21:56.620 "uuid": "82c6777e-e670-4664-a3fa-70d852066626", 00:21:56.620 "assigned_rate_limits": { 00:21:56.620 "rw_ios_per_sec": 0, 00:21:56.620 "rw_mbytes_per_sec": 0, 00:21:56.620 "r_mbytes_per_sec": 0, 00:21:56.620 "w_mbytes_per_sec": 0 00:21:56.620 }, 00:21:56.620 "claimed": false, 00:21:56.620 "zoned": false, 00:21:56.620 "supported_io_types": { 00:21:56.620 "read": true, 00:21:56.620 "write": true, 00:21:56.620 "unmap": true, 00:21:56.620 "write_zeroes": true, 00:21:56.620 "flush": false, 00:21:56.620 "reset": true, 00:21:56.620 "compare": false, 00:21:56.620 "compare_and_write": false, 00:21:56.620 "abort": false, 00:21:56.620 "nvme_admin": false, 00:21:56.620 "nvme_io": false 00:21:56.620 }, 00:21:56.620 "driver_specific": { 00:21:56.620 "lvol": { 00:21:56.620 "lvol_store_uuid": "9c11b478-07c2-42a8-b6c1-50dfc43cce37", 00:21:56.620 "base_bdev": "nvme0n1", 00:21:56.620 "thin_provision": true, 00:21:56.620 "num_allocated_clusters": 0, 00:21:56.620 "snapshot": false, 00:21:56.620 "clone": false, 00:21:56.620 "esnap_clone": false 00:21:56.620 } 00:21:56.620 } 00:21:56.620 } 00:21:56.620 ]' 00:21:56.620 13:20:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:21:56.620 13:20:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:21:56.620 13:20:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:21:56.879 13:20:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=26476544 
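The lines that follow size the thin lvol (26476544 blocks of 4096 bytes, i.e. 103424 MiB) and then assemble the FTL device on top of it, using the nvc0n1p0 split partition as the non-volatile write-buffer cache and capping the L2P table at 10 MiB of DRAM. The resulting RPC call issued just below (bdev UUID taken from this run) is, in effect:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create \
      -b ftl0 \
      -d 82c6777e-e670-4664-a3fa-70d852066626 \
      -c nvc0n1p0 \
      --l2p_dram_limit 10
The FTL startup trace further down shows the limit being honored ("l2p maximum resident size is: 9 (of 10) MiB").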
00:21:56.879 13:20:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:21:56.879 13:20:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 103424 00:21:56.879 13:20:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:21:56.879 13:20:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 82c6777e-e670-4664-a3fa-70d852066626 --l2p_dram_limit 10' 00:21:56.879 13:20:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:21:56.879 13:20:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:21:56.879 13:20:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:56.879 13:20:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 82c6777e-e670-4664-a3fa-70d852066626 --l2p_dram_limit 10 -c nvc0n1p0 00:21:57.136 [2024-07-15 13:20:53.666142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.136 [2024-07-15 13:20:53.666239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:57.136 [2024-07-15 13:20:53.666269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:57.136 [2024-07-15 13:20:53.666284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.136 [2024-07-15 13:20:53.666392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.136 [2024-07-15 13:20:53.666412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:57.136 [2024-07-15 13:20:53.666442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:21:57.136 [2024-07-15 13:20:53.666458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.136 [2024-07-15 13:20:53.666501] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:57.136 [2024-07-15 13:20:53.666877] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:57.136 [2024-07-15 13:20:53.666910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.136 [2024-07-15 13:20:53.666928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:57.137 [2024-07-15 13:20:53.666945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:21:57.137 [2024-07-15 13:20:53.666967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.137 [2024-07-15 13:20:53.667126] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID cdf8a61f-a7d9-4127-9cbf-b91f3735d164 00:21:57.137 [2024-07-15 13:20:53.668956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.137 [2024-07-15 13:20:53.669004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:57.137 [2024-07-15 13:20:53.669022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:21:57.137 [2024-07-15 13:20:53.669043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.137 [2024-07-15 13:20:53.678784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.137 [2024-07-15 13:20:53.678868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:57.137 [2024-07-15 13:20:53.678890] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.647 ms 00:21:57.137 [2024-07-15 13:20:53.678910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.137 [2024-07-15 13:20:53.679039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.137 [2024-07-15 13:20:53.679073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:57.137 [2024-07-15 13:20:53.679090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:21:57.137 [2024-07-15 13:20:53.679106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.137 [2024-07-15 13:20:53.679240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.137 [2024-07-15 13:20:53.679274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:57.137 [2024-07-15 13:20:53.679290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:57.137 [2024-07-15 13:20:53.679306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.137 [2024-07-15 13:20:53.679359] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:57.137 [2024-07-15 13:20:53.681668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.137 [2024-07-15 13:20:53.681709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:57.137 [2024-07-15 13:20:53.681831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.316 ms 00:21:57.137 [2024-07-15 13:20:53.681845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.137 [2024-07-15 13:20:53.681909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.137 [2024-07-15 13:20:53.681927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:57.137 [2024-07-15 13:20:53.681943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:57.137 [2024-07-15 13:20:53.681957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.137 [2024-07-15 13:20:53.681993] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:57.137 [2024-07-15 13:20:53.682206] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:57.137 [2024-07-15 13:20:53.682242] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:57.137 [2024-07-15 13:20:53.682261] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:57.137 [2024-07-15 13:20:53.682280] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:57.137 [2024-07-15 13:20:53.682295] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:57.137 [2024-07-15 13:20:53.682312] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:57.137 [2024-07-15 13:20:53.682333] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:57.137 [2024-07-15 13:20:53.682352] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:57.137 [2024-07-15 13:20:53.682365] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:57.137 [2024-07-15 13:20:53.682381] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.137 [2024-07-15 13:20:53.682394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:57.137 [2024-07-15 13:20:53.682410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:21:57.137 [2024-07-15 13:20:53.682423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.137 [2024-07-15 13:20:53.682521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.137 [2024-07-15 13:20:53.682538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:57.137 [2024-07-15 13:20:53.682557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:57.137 [2024-07-15 13:20:53.682569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.137 [2024-07-15 13:20:53.682696] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:57.137 [2024-07-15 13:20:53.682715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:57.137 [2024-07-15 13:20:53.682741] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:57.137 [2024-07-15 13:20:53.682754] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.137 [2024-07-15 13:20:53.682770] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:57.137 [2024-07-15 13:20:53.682782] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:57.137 [2024-07-15 13:20:53.682796] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:57.137 [2024-07-15 13:20:53.682808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:57.137 [2024-07-15 13:20:53.682822] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:57.137 [2024-07-15 13:20:53.682834] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:57.137 [2024-07-15 13:20:53.682848] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:57.137 [2024-07-15 13:20:53.682859] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:57.137 [2024-07-15 13:20:53.682873] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:57.137 [2024-07-15 13:20:53.682885] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:57.137 [2024-07-15 13:20:53.682902] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:57.137 [2024-07-15 13:20:53.682913] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.137 [2024-07-15 13:20:53.682927] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:57.137 [2024-07-15 13:20:53.682939] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:57.137 [2024-07-15 13:20:53.682952] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.137 [2024-07-15 13:20:53.682964] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:57.137 [2024-07-15 13:20:53.682979] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:57.137 [2024-07-15 13:20:53.682991] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:57.137 [2024-07-15 13:20:53.683005] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:57.137 [2024-07-15 13:20:53.683018] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:57.137 [2024-07-15 
13:20:53.683032] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:57.137 [2024-07-15 13:20:53.683044] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:57.137 [2024-07-15 13:20:53.683060] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:57.137 [2024-07-15 13:20:53.683073] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:57.137 [2024-07-15 13:20:53.683087] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:57.137 [2024-07-15 13:20:53.683099] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:57.137 [2024-07-15 13:20:53.683115] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:57.137 [2024-07-15 13:20:53.683127] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:57.137 [2024-07-15 13:20:53.683154] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:57.137 [2024-07-15 13:20:53.683170] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:57.137 [2024-07-15 13:20:53.683185] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:57.137 [2024-07-15 13:20:53.683198] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:57.137 [2024-07-15 13:20:53.683222] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:57.137 [2024-07-15 13:20:53.683234] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:57.137 [2024-07-15 13:20:53.683248] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:57.137 [2024-07-15 13:20:53.683259] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.137 [2024-07-15 13:20:53.683274] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:57.137 [2024-07-15 13:20:53.683286] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:57.137 [2024-07-15 13:20:53.683300] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.137 [2024-07-15 13:20:53.683312] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:57.137 [2024-07-15 13:20:53.683327] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:57.137 [2024-07-15 13:20:53.683341] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:57.137 [2024-07-15 13:20:53.683359] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.137 [2024-07-15 13:20:53.683376] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:57.137 [2024-07-15 13:20:53.683390] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:57.137 [2024-07-15 13:20:53.683402] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:57.137 [2024-07-15 13:20:53.683418] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:57.137 [2024-07-15 13:20:53.683430] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:57.137 [2024-07-15 13:20:53.683445] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:57.137 [2024-07-15 13:20:53.683463] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:57.137 [2024-07-15 13:20:53.683481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 
blk_sz:0x20 00:21:57.137 [2024-07-15 13:20:53.683510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:57.137 [2024-07-15 13:20:53.683527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:57.137 [2024-07-15 13:20:53.683540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:57.137 [2024-07-15 13:20:53.683555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:57.137 [2024-07-15 13:20:53.683567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:57.137 [2024-07-15 13:20:53.683582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:57.137 [2024-07-15 13:20:53.683595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:57.137 [2024-07-15 13:20:53.683612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:57.137 [2024-07-15 13:20:53.683625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:57.137 [2024-07-15 13:20:53.683640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:57.137 [2024-07-15 13:20:53.683653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:57.137 [2024-07-15 13:20:53.683668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:57.137 [2024-07-15 13:20:53.683681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:57.137 [2024-07-15 13:20:53.683696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:57.137 [2024-07-15 13:20:53.683709] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:57.137 [2024-07-15 13:20:53.683726] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:57.137 [2024-07-15 13:20:53.683740] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:57.137 [2024-07-15 13:20:53.683757] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:57.137 [2024-07-15 13:20:53.683771] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:57.137 [2024-07-15 13:20:53.683786] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:57.137 [2024-07-15 13:20:53.683801] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.137 [2024-07-15 13:20:53.683817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:57.137 [2024-07-15 13:20:53.683830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.172 ms 00:21:57.137 [2024-07-15 13:20:53.683848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.137 [2024-07-15 13:20:53.683952] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:57.137 [2024-07-15 13:20:53.683976] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:00.414 [2024-07-15 13:20:56.445197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.414 [2024-07-15 13:20:56.445287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:00.414 [2024-07-15 13:20:56.445323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2761.256 ms 00:22:00.414 [2024-07-15 13:20:56.445342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.414 [2024-07-15 13:20:56.459967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.414 [2024-07-15 13:20:56.460053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:00.414 [2024-07-15 13:20:56.460078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.486 ms 00:22:00.414 [2024-07-15 13:20:56.460096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.414 [2024-07-15 13:20:56.460260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.414 [2024-07-15 13:20:56.460294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:00.414 [2024-07-15 13:20:56.460324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:22:00.414 [2024-07-15 13:20:56.460342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.414 [2024-07-15 13:20:56.473860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.414 [2024-07-15 13:20:56.473937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:00.414 [2024-07-15 13:20:56.473961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.418 ms 00:22:00.414 [2024-07-15 13:20:56.473977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.414 [2024-07-15 13:20:56.474066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.414 [2024-07-15 13:20:56.474090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:00.414 [2024-07-15 13:20:56.474106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:00.414 [2024-07-15 13:20:56.474121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.414 [2024-07-15 13:20:56.474808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.414 [2024-07-15 13:20:56.474839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:00.414 [2024-07-15 13:20:56.474856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:22:00.414 [2024-07-15 13:20:56.474883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.414 [2024-07-15 13:20:56.475053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.414 [2024-07-15 13:20:56.475082] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:00.414 [2024-07-15 13:20:56.475097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:22:00.414 [2024-07-15 13:20:56.475113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.414 [2024-07-15 13:20:56.484505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.414 [2024-07-15 13:20:56.484583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:00.414 [2024-07-15 13:20:56.484606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.358 ms 00:22:00.414 [2024-07-15 13:20:56.484635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.414 [2024-07-15 13:20:56.495797] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:00.414 [2024-07-15 13:20:56.500047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.414 [2024-07-15 13:20:56.500098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:00.414 [2024-07-15 13:20:56.500125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.247 ms 00:22:00.414 [2024-07-15 13:20:56.500139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.414 [2024-07-15 13:20:56.566307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.414 [2024-07-15 13:20:56.566393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:00.414 [2024-07-15 13:20:56.566423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.079 ms 00:22:00.414 [2024-07-15 13:20:56.566442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.414 [2024-07-15 13:20:56.566713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.414 [2024-07-15 13:20:56.566735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:00.414 [2024-07-15 13:20:56.566754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:22:00.414 [2024-07-15 13:20:56.566768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.414 [2024-07-15 13:20:56.570520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.414 [2024-07-15 13:20:56.570570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:00.414 [2024-07-15 13:20:56.570594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.671 ms 00:22:00.414 [2024-07-15 13:20:56.570623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.414 [2024-07-15 13:20:56.573738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.414 [2024-07-15 13:20:56.573784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:00.414 [2024-07-15 13:20:56.573809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.054 ms 00:22:00.414 [2024-07-15 13:20:56.573822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.414 [2024-07-15 13:20:56.574419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.414 [2024-07-15 13:20:56.574456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:00.414 [2024-07-15 13:20:56.574497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:22:00.414 [2024-07-15 
13:20:56.574526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.414 [2024-07-15 13:20:56.610276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.414 [2024-07-15 13:20:56.610363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:00.414 [2024-07-15 13:20:56.610393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.699 ms 00:22:00.414 [2024-07-15 13:20:56.610412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.414 [2024-07-15 13:20:56.615798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.414 [2024-07-15 13:20:56.615863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:00.414 [2024-07-15 13:20:56.615889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.307 ms 00:22:00.414 [2024-07-15 13:20:56.615903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.414 [2024-07-15 13:20:56.619812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.414 [2024-07-15 13:20:56.619865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:00.415 [2024-07-15 13:20:56.619889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.842 ms 00:22:00.415 [2024-07-15 13:20:56.619902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.415 [2024-07-15 13:20:56.623933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.415 [2024-07-15 13:20:56.623983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:00.415 [2024-07-15 13:20:56.624008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.969 ms 00:22:00.415 [2024-07-15 13:20:56.624021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.415 [2024-07-15 13:20:56.624106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.415 [2024-07-15 13:20:56.624129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:00.415 [2024-07-15 13:20:56.624178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:00.415 [2024-07-15 13:20:56.624204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.415 [2024-07-15 13:20:56.624323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.415 [2024-07-15 13:20:56.624343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:00.415 [2024-07-15 13:20:56.624360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:22:00.415 [2024-07-15 13:20:56.624373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.415 [2024-07-15 13:20:56.625816] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2959.175 ms, result 0 00:22:00.415 { 00:22:00.415 "name": "ftl0", 00:22:00.415 "uuid": "cdf8a61f-a7d9-4127-9cbf-b91f3735d164" 00:22:00.415 } 00:22:00.415 13:20:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:00.415 13:20:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:00.415 13:20:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:00.415 13:20:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:00.415 13:20:56 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:00.673 /dev/nbd0 00:22:00.673 13:20:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:00.673 13:20:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:22:00.673 13:20:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@865 -- # local i 00:22:00.673 13:20:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:22:00.673 13:20:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:22:00.673 13:20:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:22:00.673 13:20:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # break 00:22:00.673 13:20:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:22:00.673 13:20:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:22:00.673 13:20:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:00.673 1+0 records in 00:22:00.673 1+0 records out 00:22:00.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046758 s, 8.8 MB/s 00:22:00.673 13:20:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:00.673 13:20:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # size=4096 00:22:00.673 13:20:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:00.673 13:20:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:22:00.673 13:20:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # return 0 00:22:00.673 13:20:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:00.673 [2024-07-15 13:20:57.311796] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:22:00.673 [2024-07-15 13:20:57.311986] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92339 ] 00:22:00.932 [2024-07-15 13:20:57.464283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.932 [2024-07-15 13:20:57.566882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.504  Copying: 164/1024 [MB] (164 MBps) Copying: 329/1024 [MB] (165 MBps) Copying: 489/1024 [MB] (160 MBps) Copying: 656/1024 [MB] (166 MBps) Copying: 818/1024 [MB] (162 MBps) Copying: 985/1024 [MB] (166 MBps) Copying: 1024/1024 [MB] (average 164 MBps) 00:22:07.504 00:22:07.504 13:21:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:10.028 13:21:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:10.028 [2024-07-15 13:21:06.518871] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
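(For readability: the write phase traced above reduces to the short sequence below. This is a sketch reconstructed only from commands already visible in this log, with repository paths abbreviated; the block size 4096 and count 262144, i.e. 1 GiB, are taken verbatim from the trace.)

  scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0                       # expose the ftl0 bdev as /dev/nbd0 (dirty_shutdown.sh@71)
  dd if=/dev/nbd0 of=test/ftl/nbdtest bs=4096 count=1 iflag=direct   # waitfornbd: one direct read confirms the device answers
  build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=test/ftl/testfile --bs=4096 --count=262144
  md5sum test/ftl/testfile                                           # record the md5 of the source data (dirty_shutdown.sh@76)
  build/bin/spdk_dd -m 0x2 --if=test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct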
00:22:10.028 [2024-07-15 13:21:06.519116] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92436 ] 00:22:10.028 [2024-07-15 13:21:06.672630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.285 [2024-07-15 13:21:06.779513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.247  Copying: 15/1024 [MB] (15 MBps) Copying: 30/1024 [MB] (14 MBps) Copying: 42/1024 [MB] (12 MBps) Copying: 58/1024 [MB] (15 MBps) Copying: 73/1024 [MB] (15 MBps) Copying: 88/1024 [MB] (15 MBps) Copying: 105/1024 [MB] (16 MBps) Copying: 121/1024 [MB] (16 MBps) Copying: 138/1024 [MB] (16 MBps) Copying: 154/1024 [MB] (16 MBps) Copying: 169/1024 [MB] (15 MBps) Copying: 184/1024 [MB] (15 MBps) Copying: 200/1024 [MB] (15 MBps) Copying: 215/1024 [MB] (15 MBps) Copying: 231/1024 [MB] (16 MBps) Copying: 247/1024 [MB] (15 MBps) Copying: 263/1024 [MB] (16 MBps) Copying: 279/1024 [MB] (16 MBps) Copying: 296/1024 [MB] (16 MBps) Copying: 312/1024 [MB] (16 MBps) Copying: 328/1024 [MB] (16 MBps) Copying: 344/1024 [MB] (16 MBps) Copying: 360/1024 [MB] (16 MBps) Copying: 376/1024 [MB] (15 MBps) Copying: 392/1024 [MB] (16 MBps) Copying: 409/1024 [MB] (16 MBps) Copying: 425/1024 [MB] (16 MBps) Copying: 440/1024 [MB] (15 MBps) Copying: 456/1024 [MB] (16 MBps) Copying: 472/1024 [MB] (15 MBps) Copying: 488/1024 [MB] (16 MBps) Copying: 504/1024 [MB] (15 MBps) Copying: 520/1024 [MB] (16 MBps) Copying: 536/1024 [MB] (16 MBps) Copying: 552/1024 [MB] (15 MBps) Copying: 568/1024 [MB] (15 MBps) Copying: 584/1024 [MB] (15 MBps) Copying: 600/1024 [MB] (15 MBps) Copying: 616/1024 [MB] (15 MBps) Copying: 631/1024 [MB] (15 MBps) Copying: 648/1024 [MB] (16 MBps) Copying: 664/1024 [MB] (16 MBps) Copying: 680/1024 [MB] (16 MBps) Copying: 696/1024 [MB] (16 MBps) Copying: 712/1024 [MB] (15 MBps) Copying: 728/1024 [MB] (16 MBps) Copying: 744/1024 [MB] (15 MBps) Copying: 759/1024 [MB] (15 MBps) Copying: 775/1024 [MB] (15 MBps) Copying: 790/1024 [MB] (15 MBps) Copying: 806/1024 [MB] (15 MBps) Copying: 822/1024 [MB] (15 MBps) Copying: 838/1024 [MB] (16 MBps) Copying: 854/1024 [MB] (16 MBps) Copying: 870/1024 [MB] (15 MBps) Copying: 886/1024 [MB] (15 MBps) Copying: 901/1024 [MB] (15 MBps) Copying: 917/1024 [MB] (15 MBps) Copying: 933/1024 [MB] (15 MBps) Copying: 948/1024 [MB] (15 MBps) Copying: 966/1024 [MB] (17 MBps) Copying: 981/1024 [MB] (15 MBps) Copying: 997/1024 [MB] (15 MBps) Copying: 1013/1024 [MB] (15 MBps) Copying: 1024/1024 [MB] (average 15 MBps) 00:23:15.247 00:23:15.247 13:22:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:15.247 13:22:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:15.505 13:22:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:15.764 [2024-07-15 13:22:12.326072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.764 [2024-07-15 13:22:12.326170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:15.764 [2024-07-15 13:22:12.326197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:15.764 [2024-07-15 13:22:12.326214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.764 [2024-07-15 13:22:12.326254] 
mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:15.764 [2024-07-15 13:22:12.327138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.764 [2024-07-15 13:22:12.327187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:15.764 [2024-07-15 13:22:12.327213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.855 ms 00:23:15.764 [2024-07-15 13:22:12.327226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.764 [2024-07-15 13:22:12.329120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.764 [2024-07-15 13:22:12.329177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:15.764 [2024-07-15 13:22:12.329210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.853 ms 00:23:15.764 [2024-07-15 13:22:12.329224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.764 [2024-07-15 13:22:12.346215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.764 [2024-07-15 13:22:12.346318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:15.764 [2024-07-15 13:22:12.346351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.934 ms 00:23:15.764 [2024-07-15 13:22:12.346365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.764 [2024-07-15 13:22:12.353003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.764 [2024-07-15 13:22:12.353050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:15.764 [2024-07-15 13:22:12.353074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.552 ms 00:23:15.764 [2024-07-15 13:22:12.353088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.764 [2024-07-15 13:22:12.355212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.764 [2024-07-15 13:22:12.355257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:15.764 [2024-07-15 13:22:12.355283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.969 ms 00:23:15.764 [2024-07-15 13:22:12.355296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.764 [2024-07-15 13:22:12.360047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.764 [2024-07-15 13:22:12.360107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:15.764 [2024-07-15 13:22:12.360132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.692 ms 00:23:15.764 [2024-07-15 13:22:12.360164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.764 [2024-07-15 13:22:12.360332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.764 [2024-07-15 13:22:12.360353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:15.764 [2024-07-15 13:22:12.360392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:23:15.764 [2024-07-15 13:22:12.360406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.764 [2024-07-15 13:22:12.362407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.764 [2024-07-15 13:22:12.362447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:15.764 [2024-07-15 13:22:12.362468] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.962 ms 00:23:15.764 [2024-07-15 13:22:12.362481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.764 [2024-07-15 13:22:12.363924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.764 [2024-07-15 13:22:12.363964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:15.764 [2024-07-15 13:22:12.363992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.390 ms 00:23:15.764 [2024-07-15 13:22:12.364005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.764 [2024-07-15 13:22:12.365189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.764 [2024-07-15 13:22:12.365349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:15.764 [2024-07-15 13:22:12.365382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.133 ms 00:23:15.764 [2024-07-15 13:22:12.365408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.764 [2024-07-15 13:22:12.366618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.764 [2024-07-15 13:22:12.366659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:15.764 [2024-07-15 13:22:12.366680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.060 ms 00:23:15.764 [2024-07-15 13:22:12.366693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.764 [2024-07-15 13:22:12.366743] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:15.764 [2024-07-15 13:22:12.366786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.366820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.366834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.366850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.366864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.366885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.366899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.366915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.366929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.366945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.366959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.366975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.366988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.367004] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.367018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.367034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.367047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.367063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.367077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.367093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.367106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.367124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.367138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.367177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.367193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.367209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.367223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.367239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.367253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.367269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.367282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.367304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.367320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.367336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.367350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:15.764 [2024-07-15 13:22:12.367366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 
13:22:12.367410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 
00:23:15.765 [2024-07-15 13:22:12.367775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.367989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.368003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.368018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.368032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.368048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.368061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.368079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.368092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.368123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.368137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.368174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 
wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.368189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.368205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.368218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.368234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.368248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.368264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.368277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.368293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.368306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.368323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.368337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.368353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:15.765 [2024-07-15 13:22:12.368376] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:15.765 [2024-07-15 13:22:12.368394] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cdf8a61f-a7d9-4127-9cbf-b91f3735d164 00:23:15.765 [2024-07-15 13:22:12.368407] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:15.765 [2024-07-15 13:22:12.368422] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:15.765 [2024-07-15 13:22:12.368434] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:15.765 [2024-07-15 13:22:12.368449] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:15.765 [2024-07-15 13:22:12.368462] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:15.765 [2024-07-15 13:22:12.368477] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:15.765 [2024-07-15 13:22:12.368490] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:15.765 [2024-07-15 13:22:12.368505] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:15.765 [2024-07-15 13:22:12.368517] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:15.765 [2024-07-15 13:22:12.368533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.765 [2024-07-15 13:22:12.368546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:15.765 [2024-07-15 13:22:12.368562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.794 ms 00:23:15.765 [2024-07-15 13:22:12.368578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.765 [2024-07-15 13:22:12.370872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:15.765 [2024-07-15 13:22:12.370904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:15.765 [2024-07-15 13:22:12.370927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.229 ms 00:23:15.765 [2024-07-15 13:22:12.370941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.765 [2024-07-15 13:22:12.371084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.765 [2024-07-15 13:22:12.371106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:15.765 [2024-07-15 13:22:12.371123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:23:15.765 [2024-07-15 13:22:12.371174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.765 [2024-07-15 13:22:12.379819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.765 [2024-07-15 13:22:12.380119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:15.765 [2024-07-15 13:22:12.380265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.765 [2024-07-15 13:22:12.380379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.765 [2024-07-15 13:22:12.380528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.765 [2024-07-15 13:22:12.380594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:15.765 [2024-07-15 13:22:12.380769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.765 [2024-07-15 13:22:12.380826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.765 [2024-07-15 13:22:12.381046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.765 [2024-07-15 13:22:12.381222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:15.765 [2024-07-15 13:22:12.381371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.765 [2024-07-15 13:22:12.381426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.765 [2024-07-15 13:22:12.381560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.766 [2024-07-15 13:22:12.381689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:15.766 [2024-07-15 13:22:12.381811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.766 [2024-07-15 13:22:12.381935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.766 [2024-07-15 13:22:12.399319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.766 [2024-07-15 13:22:12.399643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:15.766 [2024-07-15 13:22:12.399776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.766 [2024-07-15 13:22:12.399831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.766 [2024-07-15 13:22:12.410341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.766 [2024-07-15 13:22:12.410645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:15.766 [2024-07-15 13:22:12.410783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.766 [2024-07-15 13:22:12.410839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.766 [2024-07-15 
13:22:12.411067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.766 [2024-07-15 13:22:12.411122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:15.766 [2024-07-15 13:22:12.411338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.766 [2024-07-15 13:22:12.411489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.766 [2024-07-15 13:22:12.411625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.766 [2024-07-15 13:22:12.411693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:15.766 [2024-07-15 13:22:12.411831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.766 [2024-07-15 13:22:12.411887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.766 [2024-07-15 13:22:12.412081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.766 [2024-07-15 13:22:12.412191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:15.766 [2024-07-15 13:22:12.412310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.766 [2024-07-15 13:22:12.412452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.766 [2024-07-15 13:22:12.412567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.766 [2024-07-15 13:22:12.412631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:15.766 [2024-07-15 13:22:12.412752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.766 [2024-07-15 13:22:12.412808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.766 [2024-07-15 13:22:12.412961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.766 [2024-07-15 13:22:12.413089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:15.766 [2024-07-15 13:22:12.413167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.766 [2024-07-15 13:22:12.413257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.766 [2024-07-15 13:22:12.413418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.766 [2024-07-15 13:22:12.413452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:15.766 [2024-07-15 13:22:12.413473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.766 [2024-07-15 13:22:12.413500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.766 [2024-07-15 13:22:12.413727] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 87.570 ms, result 0 00:23:15.766 true 00:23:15.766 13:22:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 92198 00:23:15.766 13:22:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid92198 00:23:15.766 13:22:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:16.024 [2024-07-15 13:22:12.556057] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
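(The dirty-shutdown setup itself is easy to lose in the trace above; in essence, reconstructed only from the shell lines around dirty_shutdown.sh@83-88 in this log, with paths abbreviated. The ftl.json passed to spdk_dd is presumably the bdev configuration captured by the earlier save_subsystem_config call.)

  kill -9 92198                                          # stop the spdk_tgt started earlier in this run (dirty_shutdown.sh@83)
  rm -f /dev/shm/spdk_tgt_trace.pid92198                 # drop its shm trace file (dirty_shutdown.sh@84)
  build/bin/spdk_dd --if=/dev/urandom --of=test/ftl/testfile2 --bs=4096 --count=262144
  build/bin/spdk_dd --if=test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=test/ftl/config/ftl.json
                                                         # re-open ftl0 in-process from the JSON bdev config and write a
                                                         # second 1 GiB, seeking past the first 262144 blocks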
00:23:16.024 [2024-07-15 13:22:12.556330] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93096 ] 00:23:16.024 [2024-07-15 13:22:12.711134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.283 [2024-07-15 13:22:12.813354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.880  Copying: 165/1024 [MB] (165 MBps) Copying: 332/1024 [MB] (167 MBps) Copying: 498/1024 [MB] (165 MBps) Copying: 663/1024 [MB] (165 MBps) Copying: 827/1024 [MB] (163 MBps) Copying: 988/1024 [MB] (161 MBps) Copying: 1024/1024 [MB] (average 164 MBps) 00:23:22.880 00:23:22.880 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 92198 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:22.880 13:22:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:22.880 [2024-07-15 13:22:19.526465] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:22.880 [2024-07-15 13:22:19.526735] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93170 ] 00:23:23.138 [2024-07-15 13:22:19.675263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.138 [2024-07-15 13:22:19.774438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.395 [2024-07-15 13:22:19.904377] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:23.395 [2024-07-15 13:22:19.904473] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:23.395 [2024-07-15 13:22:19.967277] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:23.395 [2024-07-15 13:22:19.967602] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:23.395 [2024-07-15 13:22:19.967926] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:23.654 [2024-07-15 13:22:20.227398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.654 [2024-07-15 13:22:20.227477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:23.654 [2024-07-15 13:22:20.227500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:23.654 [2024-07-15 13:22:20.227513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.654 [2024-07-15 13:22:20.227615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.654 [2024-07-15 13:22:20.227639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:23.654 [2024-07-15 13:22:20.227652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:23.654 [2024-07-15 13:22:20.227665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.654 [2024-07-15 13:22:20.227709] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:23.654 [2024-07-15 13:22:20.228051] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev 
as NV Cache device 00:23:23.654 [2024-07-15 13:22:20.228079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.654 [2024-07-15 13:22:20.228092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:23.654 [2024-07-15 13:22:20.228118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:23:23.654 [2024-07-15 13:22:20.228171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.654 [2024-07-15 13:22:20.230115] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:23.654 [2024-07-15 13:22:20.233180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.654 [2024-07-15 13:22:20.233225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:23.654 [2024-07-15 13:22:20.233243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.067 ms 00:23:23.654 [2024-07-15 13:22:20.233255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.654 [2024-07-15 13:22:20.233361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.654 [2024-07-15 13:22:20.233387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:23.654 [2024-07-15 13:22:20.233401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:23.654 [2024-07-15 13:22:20.233424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.654 [2024-07-15 13:22:20.242169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.654 [2024-07-15 13:22:20.242234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:23.654 [2024-07-15 13:22:20.242254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.655 ms 00:23:23.654 [2024-07-15 13:22:20.242267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.654 [2024-07-15 13:22:20.242422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.654 [2024-07-15 13:22:20.242443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:23.654 [2024-07-15 13:22:20.242467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:23:23.654 [2024-07-15 13:22:20.242490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.654 [2024-07-15 13:22:20.242598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.654 [2024-07-15 13:22:20.242631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:23.654 [2024-07-15 13:22:20.242645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:23.654 [2024-07-15 13:22:20.242666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.654 [2024-07-15 13:22:20.242704] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:23.654 [2024-07-15 13:22:20.244836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.654 [2024-07-15 13:22:20.244875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:23.654 [2024-07-15 13:22:20.244891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.142 ms 00:23:23.654 [2024-07-15 13:22:20.244911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.654 [2024-07-15 13:22:20.244974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:23.654 [2024-07-15 13:22:20.244991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:23.654 [2024-07-15 13:22:20.245004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:23.654 [2024-07-15 13:22:20.245016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.654 [2024-07-15 13:22:20.245056] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:23.654 [2024-07-15 13:22:20.245090] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:23.654 [2024-07-15 13:22:20.245204] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:23.654 [2024-07-15 13:22:20.245251] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:23.655 [2024-07-15 13:22:20.245372] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:23.655 [2024-07-15 13:22:20.245405] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:23.655 [2024-07-15 13:22:20.245420] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:23.655 [2024-07-15 13:22:20.245436] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:23.655 [2024-07-15 13:22:20.245460] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:23.655 [2024-07-15 13:22:20.245473] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:23.655 [2024-07-15 13:22:20.245493] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:23.655 [2024-07-15 13:22:20.245504] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:23.655 [2024-07-15 13:22:20.245523] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:23.655 [2024-07-15 13:22:20.245537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.655 [2024-07-15 13:22:20.245549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:23.655 [2024-07-15 13:22:20.245561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms 00:23:23.655 [2024-07-15 13:22:20.245572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.655 [2024-07-15 13:22:20.245667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.655 [2024-07-15 13:22:20.245688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:23.655 [2024-07-15 13:22:20.245712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:23:23.655 [2024-07-15 13:22:20.245723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.655 [2024-07-15 13:22:20.245843] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:23.655 [2024-07-15 13:22:20.245862] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:23.655 [2024-07-15 13:22:20.245875] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:23.655 [2024-07-15 13:22:20.245898] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:23.655 [2024-07-15 13:22:20.245911] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:23.655 [2024-07-15 13:22:20.245922] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:23.655 [2024-07-15 13:22:20.245932] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:23.655 [2024-07-15 13:22:20.245945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:23.655 [2024-07-15 13:22:20.245956] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:23.655 [2024-07-15 13:22:20.245968] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:23.655 [2024-07-15 13:22:20.245979] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:23.655 [2024-07-15 13:22:20.245989] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:23.655 [2024-07-15 13:22:20.245999] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:23.655 [2024-07-15 13:22:20.246032] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:23.655 [2024-07-15 13:22:20.246046] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:23.655 [2024-07-15 13:22:20.246057] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:23.655 [2024-07-15 13:22:20.246078] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:23.655 [2024-07-15 13:22:20.246089] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:23.655 [2024-07-15 13:22:20.246100] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:23.655 [2024-07-15 13:22:20.246110] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:23.655 [2024-07-15 13:22:20.246121] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:23.655 [2024-07-15 13:22:20.246132] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:23.655 [2024-07-15 13:22:20.246158] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:23.655 [2024-07-15 13:22:20.246172] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:23.655 [2024-07-15 13:22:20.246183] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:23.655 [2024-07-15 13:22:20.246195] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:23.655 [2024-07-15 13:22:20.246206] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:23.655 [2024-07-15 13:22:20.246217] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:23.655 [2024-07-15 13:22:20.246227] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:23.655 [2024-07-15 13:22:20.246256] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:23.655 [2024-07-15 13:22:20.246271] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:23.655 [2024-07-15 13:22:20.246282] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:23.655 [2024-07-15 13:22:20.246292] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:23.655 [2024-07-15 13:22:20.246303] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:23.655 [2024-07-15 13:22:20.246314] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:23.655 [2024-07-15 13:22:20.246325] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:23.655 
[2024-07-15 13:22:20.246335] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:23.655 [2024-07-15 13:22:20.246346] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:23.655 [2024-07-15 13:22:20.246356] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:23.655 [2024-07-15 13:22:20.246367] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:23.655 [2024-07-15 13:22:20.246378] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:23.655 [2024-07-15 13:22:20.246389] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:23.655 [2024-07-15 13:22:20.246399] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:23.655 [2024-07-15 13:22:20.246410] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:23.655 [2024-07-15 13:22:20.246421] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:23.655 [2024-07-15 13:22:20.246436] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:23.655 [2024-07-15 13:22:20.246448] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:23.655 [2024-07-15 13:22:20.246460] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:23.655 [2024-07-15 13:22:20.246471] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:23.655 [2024-07-15 13:22:20.246482] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:23.655 [2024-07-15 13:22:20.246493] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:23.655 [2024-07-15 13:22:20.246503] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:23.655 [2024-07-15 13:22:20.246514] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:23.655 [2024-07-15 13:22:20.246526] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:23.655 [2024-07-15 13:22:20.246542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:23.655 [2024-07-15 13:22:20.246555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:23.655 [2024-07-15 13:22:20.246567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:23.655 [2024-07-15 13:22:20.246578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:23.655 [2024-07-15 13:22:20.246590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:23.655 [2024-07-15 13:22:20.246602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:23.655 [2024-07-15 13:22:20.246614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:23.655 [2024-07-15 13:22:20.246629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:23.655 [2024-07-15 13:22:20.246642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:23.655 [2024-07-15 13:22:20.246654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:23.655 [2024-07-15 13:22:20.246666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:23.655 [2024-07-15 13:22:20.246678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:23.655 [2024-07-15 13:22:20.246690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:23.655 [2024-07-15 13:22:20.246702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:23.655 [2024-07-15 13:22:20.246714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:23.655 [2024-07-15 13:22:20.246738] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:23.655 [2024-07-15 13:22:20.246755] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:23.655 [2024-07-15 13:22:20.246768] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:23.655 [2024-07-15 13:22:20.246780] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:23.655 [2024-07-15 13:22:20.246791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:23.655 [2024-07-15 13:22:20.246803] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:23.655 [2024-07-15 13:22:20.246816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.655 [2024-07-15 13:22:20.246828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:23.655 [2024-07-15 13:22:20.246853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.038 ms 00:23:23.655 [2024-07-15 13:22:20.246866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.655 [2024-07-15 13:22:20.270503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.656 [2024-07-15 13:22:20.270578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:23.656 [2024-07-15 13:22:20.270601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.537 ms 00:23:23.656 [2024-07-15 13:22:20.270635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.656 [2024-07-15 13:22:20.270813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.656 [2024-07-15 13:22:20.270838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:23.656 [2024-07-15 13:22:20.270852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:23:23.656 [2024-07-15 13:22:20.270864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.656 [2024-07-15 
13:22:20.283877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.656 [2024-07-15 13:22:20.283949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:23.656 [2024-07-15 13:22:20.283971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.894 ms 00:23:23.656 [2024-07-15 13:22:20.283984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.656 [2024-07-15 13:22:20.284068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.656 [2024-07-15 13:22:20.284085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:23.656 [2024-07-15 13:22:20.284099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:23.656 [2024-07-15 13:22:20.284111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.656 [2024-07-15 13:22:20.284781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.656 [2024-07-15 13:22:20.284824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:23.656 [2024-07-15 13:22:20.284844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:23:23.656 [2024-07-15 13:22:20.284856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.656 [2024-07-15 13:22:20.285035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.656 [2024-07-15 13:22:20.285059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:23.656 [2024-07-15 13:22:20.285072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:23:23.656 [2024-07-15 13:22:20.285084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.656 [2024-07-15 13:22:20.292975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.656 [2024-07-15 13:22:20.293037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:23.656 [2024-07-15 13:22:20.293056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.859 ms 00:23:23.656 [2024-07-15 13:22:20.293070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.656 [2024-07-15 13:22:20.296353] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:23.656 [2024-07-15 13:22:20.296405] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:23.656 [2024-07-15 13:22:20.296427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.656 [2024-07-15 13:22:20.296445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:23.656 [2024-07-15 13:22:20.296461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.184 ms 00:23:23.656 [2024-07-15 13:22:20.296473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.656 [2024-07-15 13:22:20.312404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.656 [2024-07-15 13:22:20.312510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:23.656 [2024-07-15 13:22:20.312534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.878 ms 00:23:23.656 [2024-07-15 13:22:20.312564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.656 [2024-07-15 13:22:20.315594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:23.656 [2024-07-15 13:22:20.315642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:23.656 [2024-07-15 13:22:20.315660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.930 ms 00:23:23.656 [2024-07-15 13:22:20.315672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.656 [2024-07-15 13:22:20.317328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.656 [2024-07-15 13:22:20.317369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:23.656 [2024-07-15 13:22:20.317386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.608 ms 00:23:23.656 [2024-07-15 13:22:20.317398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.656 [2024-07-15 13:22:20.317903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.656 [2024-07-15 13:22:20.317944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:23.656 [2024-07-15 13:22:20.317962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:23:23.656 [2024-07-15 13:22:20.317974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.656 [2024-07-15 13:22:20.341510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.656 [2024-07-15 13:22:20.341596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:23.656 [2024-07-15 13:22:20.341619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.509 ms 00:23:23.656 [2024-07-15 13:22:20.341632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.656 [2024-07-15 13:22:20.350394] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:23.656 [2024-07-15 13:22:20.354795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.656 [2024-07-15 13:22:20.354842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:23.656 [2024-07-15 13:22:20.354862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.064 ms 00:23:23.656 [2024-07-15 13:22:20.354876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.656 [2024-07-15 13:22:20.355008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.656 [2024-07-15 13:22:20.355033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:23.656 [2024-07-15 13:22:20.355052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:23.656 [2024-07-15 13:22:20.355069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.656 [2024-07-15 13:22:20.355189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.656 [2024-07-15 13:22:20.355210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:23.656 [2024-07-15 13:22:20.355223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:23:23.656 [2024-07-15 13:22:20.355235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.656 [2024-07-15 13:22:20.355272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.656 [2024-07-15 13:22:20.355288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:23.656 [2024-07-15 13:22:20.355300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 
00:23:23.656 [2024-07-15 13:22:20.355341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.656 [2024-07-15 13:22:20.355395] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:23.656 [2024-07-15 13:22:20.355412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.656 [2024-07-15 13:22:20.355425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:23.656 [2024-07-15 13:22:20.355441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:23:23.656 [2024-07-15 13:22:20.355454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.656 [2024-07-15 13:22:20.359882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.656 [2024-07-15 13:22:20.359933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:23.656 [2024-07-15 13:22:20.359951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.398 ms 00:23:23.656 [2024-07-15 13:22:20.359964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.656 [2024-07-15 13:22:20.360060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.656 [2024-07-15 13:22:20.360079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:23.656 [2024-07-15 13:22:20.360093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:23.656 [2024-07-15 13:22:20.360105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.656 [2024-07-15 13:22:20.361472] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 133.536 ms, result 0 00:24:04.766  Copying: 25/1024 [MB] (25 MBps) Copying: 51/1024 [MB] (26 MBps) Copying: 77/1024 [MB] (26 MBps) Copying: 104/1024 [MB] (26 MBps) Copying: 126/1024 [MB] (22 MBps) Copying: 153/1024 [MB] (26 MBps) Copying: 178/1024 [MB] (25 MBps) Copying: 205/1024 [MB] (26 MBps) Copying: 231/1024 [MB] (26 MBps) Copying: 257/1024 [MB] (26 MBps) Copying: 283/1024 [MB] (25 MBps) Copying: 309/1024 [MB] (26 MBps) Copying: 335/1024 [MB] (25 MBps) Copying: 360/1024 [MB] (25 MBps) Copying: 386/1024 [MB] (25 MBps) Copying: 412/1024 [MB] (25 MBps) Copying: 438/1024 [MB] (26 MBps) Copying: 465/1024 [MB] (26 MBps) Copying: 490/1024 [MB] (25 MBps) Copying: 516/1024 [MB] (26 MBps) Copying: 542/1024 [MB] (25 MBps) Copying: 567/1024 [MB] (24 MBps) Copying: 593/1024 [MB] (25 MBps) Copying: 619/1024 [MB] (25 MBps) Copying: 645/1024 [MB] (25 MBps) Copying: 670/1024 [MB] (25 MBps) Copying: 696/1024 [MB] (25 MBps) Copying: 721/1024 [MB] (25 MBps) Copying: 747/1024 [MB] (25 MBps) Copying: 773/1024 [MB] (25 MBps) Copying: 798/1024 [MB] (25 MBps) Copying: 825/1024 [MB] (27 MBps) Copying: 851/1024 [MB] (25 MBps) Copying: 877/1024 [MB] (25 MBps) Copying: 903/1024 [MB] (26 MBps) Copying: 928/1024 [MB] (24 MBps) Copying: 954/1024 [MB] (25 MBps) Copying: 980/1024 [MB] (25 MBps) Copying: 1005/1024 [MB] (25 MBps) Copying: 1023/1024 [MB] (17 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-15 13:23:01.271784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.766 [2024-07-15 13:23:01.271870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:04.766 [2024-07-15 13:23:01.271894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:04.766 [2024-07-15 13:23:01.271907] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:04.766 [2024-07-15 13:23:01.272975] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:04.766 [2024-07-15 13:23:01.276027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.766 [2024-07-15 13:23:01.276094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:04.766 [2024-07-15 13:23:01.276114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.013 ms 00:24:04.766 [2024-07-15 13:23:01.276126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.766 [2024-07-15 13:23:01.289297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.766 [2024-07-15 13:23:01.289365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:04.766 [2024-07-15 13:23:01.289386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.696 ms 00:24:04.766 [2024-07-15 13:23:01.289423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.766 [2024-07-15 13:23:01.311868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.766 [2024-07-15 13:23:01.311953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:04.766 [2024-07-15 13:23:01.311974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.407 ms 00:24:04.766 [2024-07-15 13:23:01.311987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.766 [2024-07-15 13:23:01.318480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.766 [2024-07-15 13:23:01.318541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:04.766 [2024-07-15 13:23:01.318559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.449 ms 00:24:04.766 [2024-07-15 13:23:01.318571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.766 [2024-07-15 13:23:01.320490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.767 [2024-07-15 13:23:01.320532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:04.767 [2024-07-15 13:23:01.320548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.837 ms 00:24:04.767 [2024-07-15 13:23:01.320560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.767 [2024-07-15 13:23:01.324526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.767 [2024-07-15 13:23:01.324570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:04.767 [2024-07-15 13:23:01.324603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.926 ms 00:24:04.767 [2024-07-15 13:23:01.324615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.767 [2024-07-15 13:23:01.433555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.767 [2024-07-15 13:23:01.433634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:04.767 [2024-07-15 13:23:01.433656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.889 ms 00:24:04.767 [2024-07-15 13:23:01.433668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.767 [2024-07-15 13:23:01.436232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.767 [2024-07-15 13:23:01.436275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist 
band info metadata 00:24:04.767 [2024-07-15 13:23:01.436294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.538 ms 00:24:04.767 [2024-07-15 13:23:01.436306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.767 [2024-07-15 13:23:01.437691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.767 [2024-07-15 13:23:01.437729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:04.767 [2024-07-15 13:23:01.437745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.345 ms 00:24:04.767 [2024-07-15 13:23:01.437756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.767 [2024-07-15 13:23:01.439097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.767 [2024-07-15 13:23:01.439138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:04.767 [2024-07-15 13:23:01.439171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.303 ms 00:24:04.767 [2024-07-15 13:23:01.439182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.767 [2024-07-15 13:23:01.440308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.767 [2024-07-15 13:23:01.440346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:04.767 [2024-07-15 13:23:01.440361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.052 ms 00:24:04.767 [2024-07-15 13:23:01.440372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.767 [2024-07-15 13:23:01.440409] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:04.767 [2024-07-15 13:23:01.440443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129024 / 261120 wr_cnt: 1 state: open 00:24:04.767 [2024-07-15 13:23:01.440459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 
state: free 00:24:04.767 [2024-07-15 13:23:01.440613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 
0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.440997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-07-15 13:23:01.441422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:04.768 [2024-07-15 13:23:01.441435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:04.768 [2024-07-15 13:23:01.441447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:04.768 [2024-07-15 13:23:01.441460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:04.768 [2024-07-15 13:23:01.441472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:04.768 [2024-07-15 13:23:01.441485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:04.768 [2024-07-15 13:23:01.441497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:04.768 [2024-07-15 13:23:01.441510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:04.768 [2024-07-15 13:23:01.441522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:04.768 [2024-07-15 13:23:01.441535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:04.768 [2024-07-15 13:23:01.441547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:04.768 [2024-07-15 13:23:01.441561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:04.768 [2024-07-15 13:23:01.441575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:04.768 [2024-07-15 13:23:01.441587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:04.768 [2024-07-15 13:23:01.441600] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:04.768 [2024-07-15 13:23:01.441613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:04.768 [2024-07-15 13:23:01.441626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:04.768 [2024-07-15 13:23:01.441638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:04.768 [2024-07-15 13:23:01.441651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:04.768 [2024-07-15 13:23:01.441663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:04.768 [2024-07-15 13:23:01.441676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:04.768 [2024-07-15 13:23:01.441689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:04.768 [2024-07-15 13:23:01.441702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:04.768 [2024-07-15 13:23:01.441714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:04.768 [2024-07-15 13:23:01.441727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:04.768 [2024-07-15 13:23:01.441739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:04.768 [2024-07-15 13:23:01.441752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:04.768 [2024-07-15 13:23:01.441774] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:04.768 [2024-07-15 13:23:01.441787] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cdf8a61f-a7d9-4127-9cbf-b91f3735d164 00:24:04.768 [2024-07-15 13:23:01.441799] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129024 00:24:04.768 [2024-07-15 13:23:01.441811] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129984 00:24:04.768 [2024-07-15 13:23:01.441823] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129024 00:24:04.768 [2024-07-15 13:23:01.441836] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:24:04.768 [2024-07-15 13:23:01.441847] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:04.768 [2024-07-15 13:23:01.441860] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:04.768 [2024-07-15 13:23:01.441871] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:04.768 [2024-07-15 13:23:01.441882] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:04.768 [2024-07-15 13:23:01.441892] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:04.768 [2024-07-15 13:23:01.441905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.768 [2024-07-15 13:23:01.441929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:04.768 [2024-07-15 13:23:01.441947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.497 ms 00:24:04.768 [2024-07-15 13:23:01.441959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.768 [2024-07-15 
13:23:01.444179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.768 [2024-07-15 13:23:01.444223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:04.768 [2024-07-15 13:23:01.444239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.195 ms 00:24:04.768 [2024-07-15 13:23:01.444260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.768 [2024-07-15 13:23:01.444407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.768 [2024-07-15 13:23:01.444423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:04.768 [2024-07-15 13:23:01.444436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:24:04.768 [2024-07-15 13:23:01.444458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.768 [2024-07-15 13:23:01.451621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.768 [2024-07-15 13:23:01.451675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:04.768 [2024-07-15 13:23:01.451692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.768 [2024-07-15 13:23:01.451704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.768 [2024-07-15 13:23:01.451795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.768 [2024-07-15 13:23:01.451811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:04.768 [2024-07-15 13:23:01.451824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.768 [2024-07-15 13:23:01.451836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.768 [2024-07-15 13:23:01.451922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.768 [2024-07-15 13:23:01.451942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:04.768 [2024-07-15 13:23:01.451955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.768 [2024-07-15 13:23:01.451966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.768 [2024-07-15 13:23:01.452006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.768 [2024-07-15 13:23:01.452026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:04.768 [2024-07-15 13:23:01.452038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.768 [2024-07-15 13:23:01.452050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.768 [2024-07-15 13:23:01.468027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.768 [2024-07-15 13:23:01.468102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:04.768 [2024-07-15 13:23:01.468122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.768 [2024-07-15 13:23:01.468134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.768 [2024-07-15 13:23:01.478492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.768 [2024-07-15 13:23:01.478564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:04.768 [2024-07-15 13:23:01.478583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.768 [2024-07-15 13:23:01.478596] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.768 [2024-07-15 13:23:01.478674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.768 [2024-07-15 13:23:01.478692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:04.768 [2024-07-15 13:23:01.478705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.768 [2024-07-15 13:23:01.478720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.768 [2024-07-15 13:23:01.478787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.768 [2024-07-15 13:23:01.478802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:04.768 [2024-07-15 13:23:01.478821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.768 [2024-07-15 13:23:01.478851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.768 [2024-07-15 13:23:01.478950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.768 [2024-07-15 13:23:01.478970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:04.768 [2024-07-15 13:23:01.478984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.768 [2024-07-15 13:23:01.478996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.768 [2024-07-15 13:23:01.479050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.768 [2024-07-15 13:23:01.479068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:04.768 [2024-07-15 13:23:01.479082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.768 [2024-07-15 13:23:01.479101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.768 [2024-07-15 13:23:01.479169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.768 [2024-07-15 13:23:01.479187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:04.768 [2024-07-15 13:23:01.479200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.768 [2024-07-15 13:23:01.479212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.768 [2024-07-15 13:23:01.479273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.768 [2024-07-15 13:23:01.479290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:04.768 [2024-07-15 13:23:01.479318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.768 [2024-07-15 13:23:01.479330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.768 [2024-07-15 13:23:01.479490] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 210.713 ms, result 0 00:24:05.701 00:24:05.701 00:24:05.701 13:23:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:07.604 13:23:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:07.863 [2024-07-15 13:23:04.386087] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:24:07.863 [2024-07-15 13:23:04.386302] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93613 ] 00:24:07.863 [2024-07-15 13:23:04.531106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.121 [2024-07-15 13:23:04.631922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.121 [2024-07-15 13:23:04.760543] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:08.121 [2024-07-15 13:23:04.760990] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:08.380 [2024-07-15 13:23:04.915892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.380 [2024-07-15 13:23:04.915969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:08.380 [2024-07-15 13:23:04.915993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:08.380 [2024-07-15 13:23:04.916005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.380 [2024-07-15 13:23:04.916088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.380 [2024-07-15 13:23:04.916107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:08.380 [2024-07-15 13:23:04.916120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:24:08.380 [2024-07-15 13:23:04.916136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.380 [2024-07-15 13:23:04.916197] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:08.380 [2024-07-15 13:23:04.916692] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:08.380 [2024-07-15 13:23:04.916732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.380 [2024-07-15 13:23:04.916751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:08.380 [2024-07-15 13:23:04.916764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:24:08.380 [2024-07-15 13:23:04.916775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.380 [2024-07-15 13:23:04.918717] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:08.380 [2024-07-15 13:23:04.921659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.380 [2024-07-15 13:23:04.921705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:08.380 [2024-07-15 13:23:04.921730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.944 ms 00:24:08.380 [2024-07-15 13:23:04.921742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.380 [2024-07-15 13:23:04.921818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.380 [2024-07-15 13:23:04.921837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:08.380 [2024-07-15 13:23:04.921850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:08.380 [2024-07-15 13:23:04.921874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.380 [2024-07-15 13:23:04.930499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.380 [2024-07-15 
13:23:04.930575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:08.380 [2024-07-15 13:23:04.930594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.538 ms 00:24:08.380 [2024-07-15 13:23:04.930618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.380 [2024-07-15 13:23:04.930767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.380 [2024-07-15 13:23:04.930787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:08.380 [2024-07-15 13:23:04.930810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:24:08.380 [2024-07-15 13:23:04.930821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.380 [2024-07-15 13:23:04.930922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.380 [2024-07-15 13:23:04.930941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:08.380 [2024-07-15 13:23:04.930964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:08.380 [2024-07-15 13:23:04.930975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.380 [2024-07-15 13:23:04.931012] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:08.381 [2024-07-15 13:23:04.933156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.381 [2024-07-15 13:23:04.933214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:08.381 [2024-07-15 13:23:04.933231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.155 ms 00:24:08.381 [2024-07-15 13:23:04.933242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.381 [2024-07-15 13:23:04.933296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.381 [2024-07-15 13:23:04.933311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:08.381 [2024-07-15 13:23:04.933328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:08.381 [2024-07-15 13:23:04.933339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.381 [2024-07-15 13:23:04.933391] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:08.381 [2024-07-15 13:23:04.933423] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:08.381 [2024-07-15 13:23:04.933481] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:08.381 [2024-07-15 13:23:04.933510] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:08.381 [2024-07-15 13:23:04.933615] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:08.381 [2024-07-15 13:23:04.933637] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:08.381 [2024-07-15 13:23:04.933651] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:08.381 [2024-07-15 13:23:04.933677] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:08.381 [2024-07-15 13:23:04.933694] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:08.381 [2024-07-15 13:23:04.933707] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:08.381 [2024-07-15 13:23:04.933718] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:08.381 [2024-07-15 13:23:04.933739] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:08.381 [2024-07-15 13:23:04.933750] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:08.381 [2024-07-15 13:23:04.933762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.381 [2024-07-15 13:23:04.933773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:08.381 [2024-07-15 13:23:04.933785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:24:08.381 [2024-07-15 13:23:04.933802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.381 [2024-07-15 13:23:04.933896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.381 [2024-07-15 13:23:04.933911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:08.381 [2024-07-15 13:23:04.933934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:08.381 [2024-07-15 13:23:04.933945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.381 [2024-07-15 13:23:04.934065] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:08.381 [2024-07-15 13:23:04.934083] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:08.381 [2024-07-15 13:23:04.934095] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:08.381 [2024-07-15 13:23:04.934113] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.381 [2024-07-15 13:23:04.934130] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:08.381 [2024-07-15 13:23:04.934141] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:08.381 [2024-07-15 13:23:04.934175] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:08.381 [2024-07-15 13:23:04.934188] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:08.381 [2024-07-15 13:23:04.934199] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:08.381 [2024-07-15 13:23:04.934210] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:08.381 [2024-07-15 13:23:04.934221] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:08.381 [2024-07-15 13:23:04.934231] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:08.381 [2024-07-15 13:23:04.934241] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:08.381 [2024-07-15 13:23:04.934252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:08.381 [2024-07-15 13:23:04.934262] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:08.381 [2024-07-15 13:23:04.934274] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.381 [2024-07-15 13:23:04.934284] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:08.381 [2024-07-15 13:23:04.934295] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:08.381 [2024-07-15 13:23:04.934306] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:24:08.381 [2024-07-15 13:23:04.934321] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:08.381 [2024-07-15 13:23:04.934333] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:08.381 [2024-07-15 13:23:04.934343] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:08.381 [2024-07-15 13:23:04.934354] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:08.381 [2024-07-15 13:23:04.934364] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:08.381 [2024-07-15 13:23:04.934375] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:08.381 [2024-07-15 13:23:04.934385] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:08.381 [2024-07-15 13:23:04.934395] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:08.381 [2024-07-15 13:23:04.934406] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:08.381 [2024-07-15 13:23:04.934416] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:08.381 [2024-07-15 13:23:04.934427] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:08.381 [2024-07-15 13:23:04.934437] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:08.381 [2024-07-15 13:23:04.934447] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:08.381 [2024-07-15 13:23:04.934458] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:08.381 [2024-07-15 13:23:04.934468] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:08.381 [2024-07-15 13:23:04.934478] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:08.381 [2024-07-15 13:23:04.934492] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:08.381 [2024-07-15 13:23:04.934503] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:08.381 [2024-07-15 13:23:04.934513] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:08.381 [2024-07-15 13:23:04.934524] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:08.381 [2024-07-15 13:23:04.934535] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.381 [2024-07-15 13:23:04.934545] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:08.381 [2024-07-15 13:23:04.934556] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:08.381 [2024-07-15 13:23:04.934567] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.381 [2024-07-15 13:23:04.934580] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:08.381 [2024-07-15 13:23:04.934592] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:08.381 [2024-07-15 13:23:04.934603] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:08.381 [2024-07-15 13:23:04.934614] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.381 [2024-07-15 13:23:04.934628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:08.381 [2024-07-15 13:23:04.934639] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:08.381 [2024-07-15 13:23:04.934650] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:08.381 [2024-07-15 13:23:04.934661] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:08.381 [2024-07-15 13:23:04.934675] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:08.381 [2024-07-15 13:23:04.934686] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:08.381 [2024-07-15 13:23:04.934699] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:08.381 [2024-07-15 13:23:04.934713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:08.381 [2024-07-15 13:23:04.934726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:08.381 [2024-07-15 13:23:04.934738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:08.381 [2024-07-15 13:23:04.934750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:08.381 [2024-07-15 13:23:04.934762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:08.381 [2024-07-15 13:23:04.934774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:08.381 [2024-07-15 13:23:04.934785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:08.381 [2024-07-15 13:23:04.934797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:08.381 [2024-07-15 13:23:04.934808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:08.381 [2024-07-15 13:23:04.934820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:08.381 [2024-07-15 13:23:04.934831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:08.381 [2024-07-15 13:23:04.934843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:08.381 [2024-07-15 13:23:04.934854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:08.381 [2024-07-15 13:23:04.934869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:08.381 [2024-07-15 13:23:04.934882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:08.381 [2024-07-15 13:23:04.934894] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:08.381 [2024-07-15 13:23:04.934908] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:08.381 [2024-07-15 13:23:04.934921] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
00:24:08.381 [2024-07-15 13:23:04.934934] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:08.381 [2024-07-15 13:23:04.934959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:08.381 [2024-07-15 13:23:04.934971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:08.382 [2024-07-15 13:23:04.934984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.382 [2024-07-15 13:23:04.934996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:08.382 [2024-07-15 13:23:04.935008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.996 ms 00:24:08.382 [2024-07-15 13:23:04.935024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.382 [2024-07-15 13:23:04.959420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.382 [2024-07-15 13:23:04.959493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:08.382 [2024-07-15 13:23:04.959535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.297 ms 00:24:08.382 [2024-07-15 13:23:04.959547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.382 [2024-07-15 13:23:04.959691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.382 [2024-07-15 13:23:04.959708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:08.382 [2024-07-15 13:23:04.959734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:24:08.382 [2024-07-15 13:23:04.959746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.382 [2024-07-15 13:23:04.972486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.382 [2024-07-15 13:23:04.972566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:08.382 [2024-07-15 13:23:04.972589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.631 ms 00:24:08.382 [2024-07-15 13:23:04.972600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.382 [2024-07-15 13:23:04.972675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.382 [2024-07-15 13:23:04.972691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:08.382 [2024-07-15 13:23:04.972712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:08.382 [2024-07-15 13:23:04.972736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.382 [2024-07-15 13:23:04.973384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.382 [2024-07-15 13:23:04.973413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:08.382 [2024-07-15 13:23:04.973428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:24:08.382 [2024-07-15 13:23:04.973439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.382 [2024-07-15 13:23:04.973620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.382 [2024-07-15 13:23:04.973639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:08.382 [2024-07-15 13:23:04.973652] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:24:08.382 [2024-07-15 13:23:04.973668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.382 [2024-07-15 13:23:04.981398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.382 [2024-07-15 13:23:04.981461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:08.382 [2024-07-15 13:23:04.981481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.695 ms 00:24:08.382 [2024-07-15 13:23:04.981494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.382 [2024-07-15 13:23:04.984625] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:08.382 [2024-07-15 13:23:04.984672] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:08.382 [2024-07-15 13:23:04.984699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.382 [2024-07-15 13:23:04.984712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:08.382 [2024-07-15 13:23:04.984725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.040 ms 00:24:08.382 [2024-07-15 13:23:04.984736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.382 [2024-07-15 13:23:05.003059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.382 [2024-07-15 13:23:05.003171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:08.382 [2024-07-15 13:23:05.003195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.268 ms 00:24:08.382 [2024-07-15 13:23:05.003208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.382 [2024-07-15 13:23:05.006274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.382 [2024-07-15 13:23:05.006325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:08.382 [2024-07-15 13:23:05.006342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.004 ms 00:24:08.382 [2024-07-15 13:23:05.006365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.382 [2024-07-15 13:23:05.007902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.382 [2024-07-15 13:23:05.007943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:08.382 [2024-07-15 13:23:05.007960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.490 ms 00:24:08.382 [2024-07-15 13:23:05.007977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.382 [2024-07-15 13:23:05.008474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.382 [2024-07-15 13:23:05.008499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:08.382 [2024-07-15 13:23:05.008513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:24:08.382 [2024-07-15 13:23:05.008525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.382 [2024-07-15 13:23:05.031702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.382 [2024-07-15 13:23:05.031788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:08.382 [2024-07-15 13:23:05.031823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.144 ms 00:24:08.382 
[2024-07-15 13:23:05.031838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.382 [2024-07-15 13:23:05.040504] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:08.382 [2024-07-15 13:23:05.044938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.382 [2024-07-15 13:23:05.044980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:08.382 [2024-07-15 13:23:05.045001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.024 ms 00:24:08.382 [2024-07-15 13:23:05.045012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.382 [2024-07-15 13:23:05.045141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.382 [2024-07-15 13:23:05.045198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:08.382 [2024-07-15 13:23:05.045220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:08.382 [2024-07-15 13:23:05.045231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.382 [2024-07-15 13:23:05.047452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.382 [2024-07-15 13:23:05.047504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:08.382 [2024-07-15 13:23:05.047538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.163 ms 00:24:08.382 [2024-07-15 13:23:05.047550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.382 [2024-07-15 13:23:05.047599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.382 [2024-07-15 13:23:05.047614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:08.382 [2024-07-15 13:23:05.047627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:08.382 [2024-07-15 13:23:05.047638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.382 [2024-07-15 13:23:05.047684] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:08.382 [2024-07-15 13:23:05.047700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.382 [2024-07-15 13:23:05.047728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:08.382 [2024-07-15 13:23:05.047745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:08.382 [2024-07-15 13:23:05.047767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.382 [2024-07-15 13:23:05.052237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.382 [2024-07-15 13:23:05.052299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:08.382 [2024-07-15 13:23:05.052318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.434 ms 00:24:08.382 [2024-07-15 13:23:05.052331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.382 [2024-07-15 13:23:05.052418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.382 [2024-07-15 13:23:05.052451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:08.382 [2024-07-15 13:23:05.052466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:24:08.382 [2024-07-15 13:23:05.052483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.382 [2024-07-15 
13:23:05.059994] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 142.710 ms, result 0 00:24:46.698  Copying: 864/1048576 [kB] (864 kBps) Copying: 4260/1048576 [kB] (3396 kBps) Copying: 25/1024 [MB] (21 MBps) Copying: 54/1024 [MB] (28 MBps) Copying: 82/1024 [MB] (28 MBps) Copying: 111/1024 [MB] (29 MBps) Copying: 140/1024 [MB] (29 MBps) Copying: 169/1024 [MB] (29 MBps) Copying: 198/1024 [MB] (28 MBps) Copying: 228/1024 [MB] (29 MBps) Copying: 257/1024 [MB] (29 MBps) Copying: 286/1024 [MB] (28 MBps) Copying: 314/1024 [MB] (28 MBps) Copying: 343/1024 [MB] (28 MBps) Copying: 371/1024 [MB] (27 MBps) Copying: 400/1024 [MB] (29 MBps) Copying: 429/1024 [MB] (29 MBps) Copying: 458/1024 [MB] (28 MBps) Copying: 487/1024 [MB] (29 MBps) Copying: 515/1024 [MB] (28 MBps) Copying: 545/1024 [MB] (29 MBps) Copying: 574/1024 [MB] (29 MBps) Copying: 603/1024 [MB] (28 MBps) Copying: 632/1024 [MB] (28 MBps) Copying: 660/1024 [MB] (27 MBps) Copying: 689/1024 [MB] (29 MBps) Copying: 718/1024 [MB] (29 MBps) Copying: 747/1024 [MB] (28 MBps) Copying: 777/1024 [MB] (29 MBps) Copying: 807/1024 [MB] (30 MBps) Copying: 835/1024 [MB] (27 MBps) Copying: 863/1024 [MB] (28 MBps) Copying: 892/1024 [MB] (28 MBps) Copying: 920/1024 [MB] (28 MBps) Copying: 949/1024 [MB] (28 MBps) Copying: 978/1024 [MB] (28 MBps) Copying: 1007/1024 [MB] (28 MBps) Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-15 13:23:43.217480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.698 [2024-07-15 13:23:43.217572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:46.698 [2024-07-15 13:23:43.217607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:46.698 [2024-07-15 13:23:43.217620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.698 [2024-07-15 13:23:43.217954] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:46.698 [2024-07-15 13:23:43.218916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.698 [2024-07-15 13:23:43.218956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:46.698 [2024-07-15 13:23:43.218974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.936 ms 00:24:46.698 [2024-07-15 13:23:43.218986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.698 [2024-07-15 13:23:43.219261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.698 [2024-07-15 13:23:43.219285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:46.698 [2024-07-15 13:23:43.219299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:24:46.698 [2024-07-15 13:23:43.219310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.698 [2024-07-15 13:23:43.231279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.698 [2024-07-15 13:23:43.231384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:46.698 [2024-07-15 13:23:43.231422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.937 ms 00:24:46.698 [2024-07-15 13:23:43.231435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.698 [2024-07-15 13:23:43.238818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.698 [2024-07-15 13:23:43.238916] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:46.698 [2024-07-15 13:23:43.238937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.327 ms 00:24:46.698 [2024-07-15 13:23:43.238950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.698 [2024-07-15 13:23:43.241382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.698 [2024-07-15 13:23:43.241432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:46.698 [2024-07-15 13:23:43.241449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.366 ms 00:24:46.698 [2024-07-15 13:23:43.241460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.698 [2024-07-15 13:23:43.245433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.698 [2024-07-15 13:23:43.245503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:46.698 [2024-07-15 13:23:43.245522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.927 ms 00:24:46.698 [2024-07-15 13:23:43.245534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.698 [2024-07-15 13:23:43.249229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.698 [2024-07-15 13:23:43.249286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:46.698 [2024-07-15 13:23:43.249303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.637 ms 00:24:46.698 [2024-07-15 13:23:43.249315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.698 [2024-07-15 13:23:43.251214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.698 [2024-07-15 13:23:43.251255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:46.698 [2024-07-15 13:23:43.251271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.877 ms 00:24:46.698 [2024-07-15 13:23:43.251282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.698 [2024-07-15 13:23:43.252655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.698 [2024-07-15 13:23:43.252694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:46.698 [2024-07-15 13:23:43.252709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.334 ms 00:24:46.698 [2024-07-15 13:23:43.252720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.698 [2024-07-15 13:23:43.253890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.698 [2024-07-15 13:23:43.253928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:46.698 [2024-07-15 13:23:43.253943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.132 ms 00:24:46.698 [2024-07-15 13:23:43.253953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.698 [2024-07-15 13:23:43.255124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.698 [2024-07-15 13:23:43.255186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:46.698 [2024-07-15 13:23:43.255203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.104 ms 00:24:46.698 [2024-07-15 13:23:43.255214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.698 [2024-07-15 13:23:43.255251] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 
validity: 00:24:46.698 [2024-07-15 13:23:43.255274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:46.698 [2024-07-15 13:23:43.255289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:24:46.698 [2024-07-15 13:23:43.255302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 
/ 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:46.698 [2024-07-15 13:23:43.255775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.255787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.255799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.255811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.255822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.255834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.255846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.255858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.255871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.255883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.255895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.255907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.255919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.255931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.255944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.255972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.255985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.255997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256224] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 
13:23:43.256533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:46.699 [2024-07-15 13:23:43.256555] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:46.699 [2024-07-15 13:23:43.256574] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cdf8a61f-a7d9-4127-9cbf-b91f3735d164 00:24:46.699 [2024-07-15 13:23:43.256587] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:24:46.699 [2024-07-15 13:23:43.256607] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 137920 00:24:46.699 [2024-07-15 13:23:43.256630] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 135936 00:24:46.699 [2024-07-15 13:23:43.256643] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0146 00:24:46.699 [2024-07-15 13:23:43.256654] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:46.699 [2024-07-15 13:23:43.256675] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:46.699 [2024-07-15 13:23:43.256685] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:46.699 [2024-07-15 13:23:43.256696] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:46.699 [2024-07-15 13:23:43.256706] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:46.699 [2024-07-15 13:23:43.256717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.699 [2024-07-15 13:23:43.256729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:46.699 [2024-07-15 13:23:43.256741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.468 ms 00:24:46.699 [2024-07-15 13:23:43.256752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.699 [2024-07-15 13:23:43.258891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.699 [2024-07-15 13:23:43.258924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:46.699 [2024-07-15 13:23:43.258939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.111 ms 00:24:46.699 [2024-07-15 13:23:43.258951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.699 [2024-07-15 13:23:43.259081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.699 [2024-07-15 13:23:43.259108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:46.699 [2024-07-15 13:23:43.259122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:24:46.699 [2024-07-15 13:23:43.259134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.699 [2024-07-15 13:23:43.266285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.699 [2024-07-15 13:23:43.266349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:46.699 [2024-07-15 13:23:43.266369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.699 [2024-07-15 13:23:43.266381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.699 [2024-07-15 13:23:43.266472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.699 [2024-07-15 13:23:43.266497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:46.699 [2024-07-15 13:23:43.266509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:24:46.699 [2024-07-15 13:23:43.266520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.699 [2024-07-15 13:23:43.266623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.699 [2024-07-15 13:23:43.266642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:46.699 [2024-07-15 13:23:43.266654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.699 [2024-07-15 13:23:43.266666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.699 [2024-07-15 13:23:43.266688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.699 [2024-07-15 13:23:43.266702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:46.699 [2024-07-15 13:23:43.266720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.699 [2024-07-15 13:23:43.266740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.699 [2024-07-15 13:23:43.283053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.699 [2024-07-15 13:23:43.283132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:46.699 [2024-07-15 13:23:43.283173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.699 [2024-07-15 13:23:43.283186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.699 [2024-07-15 13:23:43.293430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.699 [2024-07-15 13:23:43.293506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:46.699 [2024-07-15 13:23:43.293545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.699 [2024-07-15 13:23:43.293557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.699 [2024-07-15 13:23:43.293636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.699 [2024-07-15 13:23:43.293652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:46.699 [2024-07-15 13:23:43.293665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.699 [2024-07-15 13:23:43.293676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.699 [2024-07-15 13:23:43.293720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.699 [2024-07-15 13:23:43.293734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:46.699 [2024-07-15 13:23:43.293746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.699 [2024-07-15 13:23:43.293758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.699 [2024-07-15 13:23:43.293872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.699 [2024-07-15 13:23:43.293891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:46.699 [2024-07-15 13:23:43.293904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.700 [2024-07-15 13:23:43.293915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.700 [2024-07-15 13:23:43.293959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.700 [2024-07-15 13:23:43.293977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:46.700 [2024-07-15 13:23:43.293990] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.700 [2024-07-15 13:23:43.294001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.700 [2024-07-15 13:23:43.294078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.700 [2024-07-15 13:23:43.294094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:46.700 [2024-07-15 13:23:43.294118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.700 [2024-07-15 13:23:43.294129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.700 [2024-07-15 13:23:43.294205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.700 [2024-07-15 13:23:43.294224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:46.700 [2024-07-15 13:23:43.294237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.700 [2024-07-15 13:23:43.294248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.700 [2024-07-15 13:23:43.294399] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 76.887 ms, result 0 00:24:46.958 00:24:46.958 00:24:46.958 13:23:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:49.484 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:49.484 13:23:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:49.484 [2024-07-15 13:23:45.787610] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:24:49.484 [2024-07-15 13:23:45.787787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94023 ] 00:24:49.484 [2024-07-15 13:23:45.933690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.484 [2024-07-15 13:23:46.038175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.484 [2024-07-15 13:23:46.166727] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:49.484 [2024-07-15 13:23:46.166817] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:49.744 [2024-07-15 13:23:46.322245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.744 [2024-07-15 13:23:46.322329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:49.744 [2024-07-15 13:23:46.322353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:49.744 [2024-07-15 13:23:46.322367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.744 [2024-07-15 13:23:46.322451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.744 [2024-07-15 13:23:46.322472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:49.744 [2024-07-15 13:23:46.322486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:49.744 [2024-07-15 13:23:46.322504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.744 [2024-07-15 13:23:46.322538] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:49.744 [2024-07-15 13:23:46.322887] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:49.744 [2024-07-15 13:23:46.322915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.744 [2024-07-15 13:23:46.322938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:49.744 [2024-07-15 13:23:46.322953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:24:49.744 [2024-07-15 13:23:46.322975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.744 [2024-07-15 13:23:46.324931] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:49.744 [2024-07-15 13:23:46.328016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.744 [2024-07-15 13:23:46.328062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:49.744 [2024-07-15 13:23:46.328088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.087 ms 00:24:49.744 [2024-07-15 13:23:46.328102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.744 [2024-07-15 13:23:46.328198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.744 [2024-07-15 13:23:46.328221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:49.744 [2024-07-15 13:23:46.328251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:49.744 [2024-07-15 13:23:46.328264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.744 [2024-07-15 13:23:46.337058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.744 [2024-07-15 
13:23:46.337128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:49.744 [2024-07-15 13:23:46.337204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.708 ms 00:24:49.744 [2024-07-15 13:23:46.337218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.744 [2024-07-15 13:23:46.337376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.744 [2024-07-15 13:23:46.337397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:49.744 [2024-07-15 13:23:46.337412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:24:49.744 [2024-07-15 13:23:46.337424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.744 [2024-07-15 13:23:46.337537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.744 [2024-07-15 13:23:46.337563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:49.744 [2024-07-15 13:23:46.337582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:24:49.744 [2024-07-15 13:23:46.337595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.744 [2024-07-15 13:23:46.337634] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:49.744 [2024-07-15 13:23:46.339783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.744 [2024-07-15 13:23:46.339834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:49.744 [2024-07-15 13:23:46.339852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.160 ms 00:24:49.744 [2024-07-15 13:23:46.339865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.744 [2024-07-15 13:23:46.339920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.744 [2024-07-15 13:23:46.339952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:49.744 [2024-07-15 13:23:46.339966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:49.744 [2024-07-15 13:23:46.339978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.744 [2024-07-15 13:23:46.340036] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:49.744 [2024-07-15 13:23:46.340082] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:49.744 [2024-07-15 13:23:46.340160] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:49.744 [2024-07-15 13:23:46.340191] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:49.744 [2024-07-15 13:23:46.340328] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:49.744 [2024-07-15 13:23:46.340358] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:49.744 [2024-07-15 13:23:46.340405] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:49.744 [2024-07-15 13:23:46.340422] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:49.744 [2024-07-15 13:23:46.340448] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:49.744 [2024-07-15 13:23:46.340461] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:49.744 [2024-07-15 13:23:46.340481] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:49.744 [2024-07-15 13:23:46.340493] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:49.744 [2024-07-15 13:23:46.340504] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:49.744 [2024-07-15 13:23:46.340518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.744 [2024-07-15 13:23:46.340531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:49.744 [2024-07-15 13:23:46.340557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.486 ms 00:24:49.744 [2024-07-15 13:23:46.340576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.744 [2024-07-15 13:23:46.340724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.744 [2024-07-15 13:23:46.340751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:49.744 [2024-07-15 13:23:46.340773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:24:49.744 [2024-07-15 13:23:46.340786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.744 [2024-07-15 13:23:46.340905] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:49.744 [2024-07-15 13:23:46.340922] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:49.745 [2024-07-15 13:23:46.340947] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:49.745 [2024-07-15 13:23:46.340972] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.745 [2024-07-15 13:23:46.340985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:49.745 [2024-07-15 13:23:46.340997] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:49.745 [2024-07-15 13:23:46.341008] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:49.745 [2024-07-15 13:23:46.341021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:49.745 [2024-07-15 13:23:46.341033] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:49.745 [2024-07-15 13:23:46.341045] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:49.745 [2024-07-15 13:23:46.341056] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:49.745 [2024-07-15 13:23:46.341071] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:49.745 [2024-07-15 13:23:46.341084] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:49.745 [2024-07-15 13:23:46.341095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:49.745 [2024-07-15 13:23:46.341107] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:49.745 [2024-07-15 13:23:46.341118] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.745 [2024-07-15 13:23:46.341129] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:49.745 [2024-07-15 13:23:46.341141] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:49.745 [2024-07-15 13:23:46.341171] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:24:49.745 [2024-07-15 13:23:46.341184] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:49.745 [2024-07-15 13:23:46.341196] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:49.745 [2024-07-15 13:23:46.341207] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.745 [2024-07-15 13:23:46.341219] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:49.745 [2024-07-15 13:23:46.341230] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:49.745 [2024-07-15 13:23:46.341242] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.745 [2024-07-15 13:23:46.341253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:49.745 [2024-07-15 13:23:46.341264] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:49.745 [2024-07-15 13:23:46.341283] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.745 [2024-07-15 13:23:46.341296] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:49.745 [2024-07-15 13:23:46.341307] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:49.745 [2024-07-15 13:23:46.341319] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.745 [2024-07-15 13:23:46.341331] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:49.745 [2024-07-15 13:23:46.341342] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:49.745 [2024-07-15 13:23:46.341353] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:49.745 [2024-07-15 13:23:46.341365] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:49.745 [2024-07-15 13:23:46.341376] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:49.745 [2024-07-15 13:23:46.341387] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:49.745 [2024-07-15 13:23:46.341398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:49.745 [2024-07-15 13:23:46.341409] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:49.745 [2024-07-15 13:23:46.341420] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.745 [2024-07-15 13:23:46.341432] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:49.745 [2024-07-15 13:23:46.341443] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:49.745 [2024-07-15 13:23:46.341456] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.745 [2024-07-15 13:23:46.341473] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:49.745 [2024-07-15 13:23:46.341486] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:49.745 [2024-07-15 13:23:46.341508] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:49.745 [2024-07-15 13:23:46.341520] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.745 [2024-07-15 13:23:46.341533] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:49.745 [2024-07-15 13:23:46.341544] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:49.745 [2024-07-15 13:23:46.341557] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:49.745 [2024-07-15 13:23:46.341569] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:49.745 [2024-07-15 13:23:46.341580] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:49.745 [2024-07-15 13:23:46.341592] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:49.745 [2024-07-15 13:23:46.341605] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:49.745 [2024-07-15 13:23:46.341620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:49.745 [2024-07-15 13:23:46.341633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:49.745 [2024-07-15 13:23:46.341645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:49.745 [2024-07-15 13:23:46.341658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:49.745 [2024-07-15 13:23:46.341670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:49.745 [2024-07-15 13:23:46.341686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:49.745 [2024-07-15 13:23:46.341699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:49.745 [2024-07-15 13:23:46.341712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:49.745 [2024-07-15 13:23:46.341723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:49.745 [2024-07-15 13:23:46.341735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:49.745 [2024-07-15 13:23:46.341747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:49.745 [2024-07-15 13:23:46.341759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:49.745 [2024-07-15 13:23:46.341771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:49.745 [2024-07-15 13:23:46.341783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:49.745 [2024-07-15 13:23:46.341795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:49.745 [2024-07-15 13:23:46.341808] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:49.745 [2024-07-15 13:23:46.341830] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:49.745 [2024-07-15 13:23:46.341845] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
00:24:49.745 [2024-07-15 13:23:46.341857] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:49.745 [2024-07-15 13:23:46.341881] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:49.745 [2024-07-15 13:23:46.341894] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:49.745 [2024-07-15 13:23:46.341912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.745 [2024-07-15 13:23:46.341926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:49.745 [2024-07-15 13:23:46.341942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:24:49.745 [2024-07-15 13:23:46.341955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.745 [2024-07-15 13:23:46.367793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.745 [2024-07-15 13:23:46.368167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:49.745 [2024-07-15 13:23:46.368355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.732 ms 00:24:49.745 [2024-07-15 13:23:46.368548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.745 [2024-07-15 13:23:46.368821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.745 [2024-07-15 13:23:46.368942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:49.745 [2024-07-15 13:23:46.369173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:24:49.745 [2024-07-15 13:23:46.369364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.745 [2024-07-15 13:23:46.383045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.745 [2024-07-15 13:23:46.383322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:49.745 [2024-07-15 13:23:46.383451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.478 ms 00:24:49.745 [2024-07-15 13:23:46.383507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.746 [2024-07-15 13:23:46.383616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.746 [2024-07-15 13:23:46.383738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:49.746 [2024-07-15 13:23:46.383795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:49.746 [2024-07-15 13:23:46.383835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.746 [2024-07-15 13:23:46.384522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.746 [2024-07-15 13:23:46.384662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:49.746 [2024-07-15 13:23:46.384775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:24:49.746 [2024-07-15 13:23:46.384826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.746 [2024-07-15 13:23:46.385036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.746 [2024-07-15 13:23:46.385101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:49.746 [2024-07-15 13:23:46.385220] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:24:49.746 [2024-07-15 13:23:46.385380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.746 [2024-07-15 13:23:46.393254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.746 [2024-07-15 13:23:46.393511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:49.746 [2024-07-15 13:23:46.393647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.791 ms 00:24:49.746 [2024-07-15 13:23:46.393701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.746 [2024-07-15 13:23:46.396985] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:49.746 [2024-07-15 13:23:46.397216] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:49.746 [2024-07-15 13:23:46.397365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.746 [2024-07-15 13:23:46.397481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:49.746 [2024-07-15 13:23:46.397535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.384 ms 00:24:49.746 [2024-07-15 13:23:46.397635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.746 [2024-07-15 13:23:46.413690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.746 [2024-07-15 13:23:46.414007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:49.746 [2024-07-15 13:23:46.414169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.871 ms 00:24:49.746 [2024-07-15 13:23:46.414207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.746 [2024-07-15 13:23:46.417270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.746 [2024-07-15 13:23:46.417315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:49.746 [2024-07-15 13:23:46.417334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.974 ms 00:24:49.746 [2024-07-15 13:23:46.417346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.746 [2024-07-15 13:23:46.419055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.746 [2024-07-15 13:23:46.419097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:49.746 [2024-07-15 13:23:46.419115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.661 ms 00:24:49.746 [2024-07-15 13:23:46.419127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.746 [2024-07-15 13:23:46.419733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.746 [2024-07-15 13:23:46.419776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:49.746 [2024-07-15 13:23:46.419794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:24:49.746 [2024-07-15 13:23:46.419821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.746 [2024-07-15 13:23:46.443609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.746 [2024-07-15 13:23:46.443691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:49.746 [2024-07-15 13:23:46.443716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.758 ms 00:24:49.746 
[2024-07-15 13:23:46.443730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.746 [2024-07-15 13:23:46.452715] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:49.746 [2024-07-15 13:23:46.457115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.746 [2024-07-15 13:23:46.457178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:49.746 [2024-07-15 13:23:46.457209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.289 ms 00:24:49.746 [2024-07-15 13:23:46.457229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.746 [2024-07-15 13:23:46.457387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.746 [2024-07-15 13:23:46.457418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:49.746 [2024-07-15 13:23:46.457437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:49.746 [2024-07-15 13:23:46.457450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.746 [2024-07-15 13:23:46.458524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.746 [2024-07-15 13:23:46.458571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:49.746 [2024-07-15 13:23:46.458588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.013 ms 00:24:49.746 [2024-07-15 13:23:46.458608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.746 [2024-07-15 13:23:46.458649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.746 [2024-07-15 13:23:46.458666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:49.746 [2024-07-15 13:23:46.458681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:49.746 [2024-07-15 13:23:46.458694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.746 [2024-07-15 13:23:46.458739] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:49.746 [2024-07-15 13:23:46.458757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.746 [2024-07-15 13:23:46.458775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:49.746 [2024-07-15 13:23:46.458802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:49.746 [2024-07-15 13:23:46.458818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.746 [2024-07-15 13:23:46.463234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.746 [2024-07-15 13:23:46.463282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:49.746 [2024-07-15 13:23:46.463301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.378 ms 00:24:49.746 [2024-07-15 13:23:46.463315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.746 [2024-07-15 13:23:46.463415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.746 [2024-07-15 13:23:46.463436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:49.746 [2024-07-15 13:23:46.463466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:49.746 [2024-07-15 13:23:46.463479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.746 [2024-07-15 
13:23:46.464907] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 142.145 ms, result 0 00:25:29.599  Copying: 26/1024 [MB] (26 MBps) Copying: 52/1024 [MB] (26 MBps) Copying: 79/1024 [MB] (26 MBps) Copying: 105/1024 [MB] (25 MBps) Copying: 130/1024 [MB] (25 MBps) Copying: 157/1024 [MB] (26 MBps) Copying: 184/1024 [MB] (27 MBps) Copying: 210/1024 [MB] (26 MBps) Copying: 237/1024 [MB] (26 MBps) Copying: 262/1024 [MB] (25 MBps) Copying: 288/1024 [MB] (25 MBps) Copying: 314/1024 [MB] (26 MBps) Copying: 339/1024 [MB] (24 MBps) Copying: 366/1024 [MB] (26 MBps) Copying: 393/1024 [MB] (26 MBps) Copying: 418/1024 [MB] (25 MBps) Copying: 444/1024 [MB] (25 MBps) Copying: 470/1024 [MB] (26 MBps) Copying: 496/1024 [MB] (26 MBps) Copying: 523/1024 [MB] (26 MBps) Copying: 550/1024 [MB] (26 MBps) Copying: 575/1024 [MB] (25 MBps) Copying: 602/1024 [MB] (27 MBps) Copying: 628/1024 [MB] (25 MBps) Copying: 653/1024 [MB] (25 MBps) Copying: 679/1024 [MB] (25 MBps) Copying: 705/1024 [MB] (25 MBps) Copying: 731/1024 [MB] (25 MBps) Copying: 756/1024 [MB] (24 MBps) Copying: 781/1024 [MB] (25 MBps) Copying: 807/1024 [MB] (25 MBps) Copying: 834/1024 [MB] (27 MBps) Copying: 861/1024 [MB] (26 MBps) Copying: 888/1024 [MB] (27 MBps) Copying: 915/1024 [MB] (26 MBps) Copying: 942/1024 [MB] (26 MBps) Copying: 968/1024 [MB] (26 MBps) Copying: 995/1024 [MB] (26 MBps) Copying: 1021/1024 [MB] (26 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-15 13:24:26.168226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.599 [2024-07-15 13:24:26.168340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:29.599 [2024-07-15 13:24:26.168372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:29.599 [2024-07-15 13:24:26.168391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.599 [2024-07-15 13:24:26.168436] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:29.599 [2024-07-15 13:24:26.169392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.599 [2024-07-15 13:24:26.169424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:29.599 [2024-07-15 13:24:26.169444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms 00:25:29.599 [2024-07-15 13:24:26.169462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.599 [2024-07-15 13:24:26.169829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.599 [2024-07-15 13:24:26.169874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:29.599 [2024-07-15 13:24:26.169919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:25:29.599 [2024-07-15 13:24:26.169936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.599 [2024-07-15 13:24:26.174887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.599 [2024-07-15 13:24:26.174924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:29.599 [2024-07-15 13:24:26.174940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.920 ms 00:25:29.599 [2024-07-15 13:24:26.174953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.599 [2024-07-15 13:24:26.181635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.599 [2024-07-15 
13:24:26.181708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:29.600 [2024-07-15 13:24:26.181725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.654 ms 00:25:29.600 [2024-07-15 13:24:26.181745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.600 [2024-07-15 13:24:26.183729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.600 [2024-07-15 13:24:26.183775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:29.600 [2024-07-15 13:24:26.183793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.887 ms 00:25:29.600 [2024-07-15 13:24:26.183806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.600 [2024-07-15 13:24:26.187842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.600 [2024-07-15 13:24:26.187898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:29.600 [2024-07-15 13:24:26.187917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.991 ms 00:25:29.600 [2024-07-15 13:24:26.187930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.600 [2024-07-15 13:24:26.191703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.600 [2024-07-15 13:24:26.191751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:29.600 [2024-07-15 13:24:26.191781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.706 ms 00:25:29.600 [2024-07-15 13:24:26.191795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.600 [2024-07-15 13:24:26.193832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.600 [2024-07-15 13:24:26.193873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:29.600 [2024-07-15 13:24:26.193890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.001 ms 00:25:29.600 [2024-07-15 13:24:26.193903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.600 [2024-07-15 13:24:26.195290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.600 [2024-07-15 13:24:26.195326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:29.600 [2024-07-15 13:24:26.195341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.349 ms 00:25:29.600 [2024-07-15 13:24:26.195354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.600 [2024-07-15 13:24:26.196579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.600 [2024-07-15 13:24:26.196617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:29.600 [2024-07-15 13:24:26.196632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.188 ms 00:25:29.600 [2024-07-15 13:24:26.196644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.600 [2024-07-15 13:24:26.197790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.600 [2024-07-15 13:24:26.197831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:29.600 [2024-07-15 13:24:26.197847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.076 ms 00:25:29.600 [2024-07-15 13:24:26.197859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.600 [2024-07-15 13:24:26.197908] ftl_debug.c: 
165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:29.600 [2024-07-15 13:24:26.197932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:29.600 [2024-07-15 13:24:26.197948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:25:29.600 [2024-07-15 13:24:26.197962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.197975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.197988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198292] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 
13:24:26.198609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:29.600 [2024-07-15 13:24:26.198892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.198904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.198916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.198929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:25:29.601 [2024-07-15 13:24:26.198941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.198954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.198976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.198988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.199000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.199013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.199026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.199038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.199050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.199063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.199076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.199089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.199101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.199114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.199126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.199138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.199163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.199177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.199189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.199202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.199214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.199226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.199239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.199251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.199264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.199277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:29.601 [2024-07-15 13:24:26.199299] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:29.601 [2024-07-15 13:24:26.199312] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cdf8a61f-a7d9-4127-9cbf-b91f3735d164 00:25:29.601 [2024-07-15 13:24:26.199325] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:25:29.601 [2024-07-15 13:24:26.199337] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:29.601 [2024-07-15 13:24:26.199348] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:29.601 [2024-07-15 13:24:26.199376] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:29.601 [2024-07-15 13:24:26.199388] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:29.601 [2024-07-15 13:24:26.199407] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:29.601 [2024-07-15 13:24:26.199418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:29.601 [2024-07-15 13:24:26.199429] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:29.601 [2024-07-15 13:24:26.199441] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:29.601 [2024-07-15 13:24:26.199453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.601 [2024-07-15 13:24:26.199465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:29.601 [2024-07-15 13:24:26.199478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.547 ms 00:25:29.601 [2024-07-15 13:24:26.199491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.601 [2024-07-15 13:24:26.201629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.601 [2024-07-15 13:24:26.201661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:29.601 [2024-07-15 13:24:26.201676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.098 ms 00:25:29.601 [2024-07-15 13:24:26.201697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.601 [2024-07-15 13:24:26.201830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.601 [2024-07-15 13:24:26.201846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:29.601 [2024-07-15 13:24:26.201859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:25:29.601 [2024-07-15 13:24:26.201872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.601 [2024-07-15 13:24:26.208971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.601 [2024-07-15 13:24:26.209035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:29.601 [2024-07-15 13:24:26.209060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.601 [2024-07-15 13:24:26.209073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.601 [2024-07-15 13:24:26.209207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.601 [2024-07-15 13:24:26.209227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:29.601 [2024-07-15 13:24:26.209240] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.601 [2024-07-15 13:24:26.209252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.601 [2024-07-15 13:24:26.209317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.601 [2024-07-15 13:24:26.209336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:29.601 [2024-07-15 13:24:26.209349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.601 [2024-07-15 13:24:26.209369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.601 [2024-07-15 13:24:26.209392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.601 [2024-07-15 13:24:26.209406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:29.601 [2024-07-15 13:24:26.209418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.601 [2024-07-15 13:24:26.209430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.601 [2024-07-15 13:24:26.225806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.601 [2024-07-15 13:24:26.225888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:29.601 [2024-07-15 13:24:26.225935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.601 [2024-07-15 13:24:26.225948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.601 [2024-07-15 13:24:26.236224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.601 [2024-07-15 13:24:26.236302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:29.601 [2024-07-15 13:24:26.236323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.601 [2024-07-15 13:24:26.236337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.601 [2024-07-15 13:24:26.236421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.601 [2024-07-15 13:24:26.236439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:29.601 [2024-07-15 13:24:26.236453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.601 [2024-07-15 13:24:26.236466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.601 [2024-07-15 13:24:26.236532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.601 [2024-07-15 13:24:26.236548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:29.601 [2024-07-15 13:24:26.236565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.601 [2024-07-15 13:24:26.236578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.601 [2024-07-15 13:24:26.236704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.601 [2024-07-15 13:24:26.236726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:29.601 [2024-07-15 13:24:26.236740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.601 [2024-07-15 13:24:26.236752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.601 [2024-07-15 13:24:26.236814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.601 [2024-07-15 13:24:26.236832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 
00:25:29.601 [2024-07-15 13:24:26.236845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.601 [2024-07-15 13:24:26.236858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.601 [2024-07-15 13:24:26.236909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.601 [2024-07-15 13:24:26.236938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:29.601 [2024-07-15 13:24:26.236951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.601 [2024-07-15 13:24:26.236964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.601 [2024-07-15 13:24:26.237025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.601 [2024-07-15 13:24:26.237042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:29.601 [2024-07-15 13:24:26.237055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.601 [2024-07-15 13:24:26.237067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.601 [2024-07-15 13:24:26.237253] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 68.991 ms, result 0 00:25:29.859 00:25:29.859 00:25:29.859 13:24:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:32.428 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:25:32.428 13:24:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:25:32.428 13:24:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:25:32.428 13:24:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:32.428 13:24:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:32.428 13:24:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:32.428 13:24:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:32.428 13:24:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:32.428 Process with pid 92198 is not found 00:25:32.428 13:24:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 92198 00:25:32.428 13:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@946 -- # '[' -z 92198 ']' 00:25:32.428 13:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # kill -0 92198 00:25:32.428 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (92198) - No such process 00:25:32.428 13:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@973 -- # echo 'Process with pid 92198 is not found' 00:25:32.428 13:24:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:25:32.686 Remove shared memory files 00:25:32.686 13:24:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:25:32.686 13:24:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:32.686 13:24:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:25:32.686 13:24:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:25:32.686 13:24:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 
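
The "md5sum -c" check above is what the dirty-shutdown scenario ultimately asserts: data written before the unclean shutdown must read back bit-for-bit once the device is brought up again. A minimal Python equivalent of that manifest check (an illustrative sketch, not the test's own code) looks like:

    import hashlib

    def md5_of(path, chunk_size=1 << 20):
        """Stream a file and return its hex MD5 digest, as md5sum would."""
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def check_manifest(manifest_path):
        """Verify every 'digest  filename' entry from an md5sum-style manifest."""
        all_ok = True
        with open(manifest_path) as manifest:
            for entry in manifest:
                expected, _, name = entry.rstrip("\n").partition("  ")
                good = md5_of(name) == expected
                all_ok &= good
                print(f"{name}: {'OK' if good else 'FAILED'}")
        return all_ok
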
00:25:32.686 13:24:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:32.686 13:24:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:25:32.686 ************************************ 00:25:32.686 END TEST ftl_dirty_shutdown 00:25:32.686 ************************************ 00:25:32.686 00:25:32.686 real 3m40.606s 00:25:32.686 user 4m14.908s 00:25:32.686 sys 0m36.228s 00:25:32.686 13:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:32.686 13:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:32.686 13:24:29 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:32.686 13:24:29 ftl -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:25:32.686 13:24:29 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:32.686 13:24:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:32.686 ************************************ 00:25:32.686 START TEST ftl_upgrade_shutdown 00:25:32.686 ************************************ 00:25:32.686 13:24:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:32.944 * Looking for test storage... 00:25:32.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:32.944 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:32.944 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:25:32.944 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:32.944 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:25:32.945 
13:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94515 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94515 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 94515 ']' 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:32.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:32.945 13:24:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:32.945 [2024-07-15 13:24:29.579088] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:25:32.945 [2024-07-15 13:24:29.579886] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94515 ] 00:25:33.203 [2024-07-15 13:24:29.725962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.203 [2024-07-15 13:24:29.830495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.768 13:24:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:33.768 13:24:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:25:33.768 13:24:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:33.768 13:24:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:25:33.768 13:24:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:25:33.768 13:24:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:33.768 13:24:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:25:33.768 13:24:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:33.768 13:24:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:25:33.768 13:24:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:33.768 13:24:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:25:33.768 13:24:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:33.768 13:24:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:25:33.768 13:24:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:33.768 13:24:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:25:33.768 13:24:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:33.768 13:24:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:25:33.768 13:24:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:25:33.768 13:24:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:25:33.769 13:24:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:33.769 13:24:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:25:33.769 13:24:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:33.769 13:24:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:25:34.026 13:24:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:25:34.026 13:24:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:34.284 13:24:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:25:34.284 13:24:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=basen1 00:25:34.284 13:24:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:25:34.284 13:24:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:25:34.284 13:24:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1377 -- # local nb 
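
The get_bdev_size helper being traced here simply asks the target for the bdev's block size and block count and converts the product to MiB; the jq calls and the JSON they operate on are dumped a few lines below. A sketch of the same computation in Python (using the rpc.py path and subcommand shown in this log; error handling omitted):

    import json
    import subprocess

    # Path as it appears in this log; adjust for other checkouts.
    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

    def get_bdev_size_mib(bdev_name):
        """Return a bdev's capacity in MiB, mirroring the get_bdev_size shell helper."""
        out = subprocess.check_output([RPC, "bdev_get_bdevs", "-b", bdev_name])
        info = json.loads(out)[0]  # bdev_get_bdevs returns a JSON array
        return info["block_size"] * info["num_blocks"] // (1024 * 1024)

    # For basen1 below: 4096-byte blocks * 1310720 blocks = 5120 MiB,
    # which the script then compares against the requested 20480 MiB base size.
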
00:25:34.284 13:24:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:25:34.542 13:24:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:25:34.542 { 00:25:34.542 "name": "basen1", 00:25:34.542 "aliases": [ 00:25:34.542 "4acb5173-3abc-40b1-b74d-12f10be814f1" 00:25:34.542 ], 00:25:34.542 "product_name": "NVMe disk", 00:25:34.542 "block_size": 4096, 00:25:34.542 "num_blocks": 1310720, 00:25:34.542 "uuid": "4acb5173-3abc-40b1-b74d-12f10be814f1", 00:25:34.542 "assigned_rate_limits": { 00:25:34.542 "rw_ios_per_sec": 0, 00:25:34.542 "rw_mbytes_per_sec": 0, 00:25:34.542 "r_mbytes_per_sec": 0, 00:25:34.542 "w_mbytes_per_sec": 0 00:25:34.542 }, 00:25:34.542 "claimed": true, 00:25:34.542 "claim_type": "read_many_write_one", 00:25:34.542 "zoned": false, 00:25:34.542 "supported_io_types": { 00:25:34.542 "read": true, 00:25:34.542 "write": true, 00:25:34.542 "unmap": true, 00:25:34.542 "write_zeroes": true, 00:25:34.542 "flush": true, 00:25:34.542 "reset": true, 00:25:34.542 "compare": true, 00:25:34.542 "compare_and_write": false, 00:25:34.542 "abort": true, 00:25:34.542 "nvme_admin": true, 00:25:34.542 "nvme_io": true 00:25:34.542 }, 00:25:34.542 "driver_specific": { 00:25:34.542 "nvme": [ 00:25:34.542 { 00:25:34.542 "pci_address": "0000:00:11.0", 00:25:34.542 "trid": { 00:25:34.542 "trtype": "PCIe", 00:25:34.542 "traddr": "0000:00:11.0" 00:25:34.542 }, 00:25:34.542 "ctrlr_data": { 00:25:34.542 "cntlid": 0, 00:25:34.542 "vendor_id": "0x1b36", 00:25:34.542 "model_number": "QEMU NVMe Ctrl", 00:25:34.542 "serial_number": "12341", 00:25:34.542 "firmware_revision": "8.0.0", 00:25:34.542 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:34.542 "oacs": { 00:25:34.542 "security": 0, 00:25:34.542 "format": 1, 00:25:34.542 "firmware": 0, 00:25:34.542 "ns_manage": 1 00:25:34.542 }, 00:25:34.542 "multi_ctrlr": false, 00:25:34.543 "ana_reporting": false 00:25:34.543 }, 00:25:34.543 "vs": { 00:25:34.543 "nvme_version": "1.4" 00:25:34.543 }, 00:25:34.543 "ns_data": { 00:25:34.543 "id": 1, 00:25:34.543 "can_share": false 00:25:34.543 } 00:25:34.543 } 00:25:34.543 ], 00:25:34.543 "mp_policy": "active_passive" 00:25:34.543 } 00:25:34.543 } 00:25:34.543 ]' 00:25:34.543 13:24:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:25:34.543 13:24:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:25:34.543 13:24:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:25:34.543 13:24:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # nb=1310720 00:25:34.543 13:24:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:25:34.543 13:24:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # echo 5120 00:25:34.543 13:24:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:34.543 13:24:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:25:34.543 13:24:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:34.543 13:24:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:34.543 13:24:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:34.800 13:24:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=9c11b478-07c2-42a8-b6c1-50dfc43cce37 00:25:34.801 13:24:31 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@29 -- # for lvs in $stores 00:25:34.801 13:24:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9c11b478-07c2-42a8-b6c1-50dfc43cce37 00:25:35.058 13:24:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:25:35.315 13:24:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=ba446a15-97a6-4ff0-aa53-722c526778c3 00:25:35.315 13:24:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u ba446a15-97a6-4ff0-aa53-722c526778c3 00:25:35.572 13:24:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=7944a1ca-e044-41a4-94c0-f22be74cef72 00:25:35.572 13:24:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 7944a1ca-e044-41a4-94c0-f22be74cef72 ]] 00:25:35.572 13:24:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 7944a1ca-e044-41a4-94c0-f22be74cef72 5120 00:25:35.572 13:24:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:25:35.572 13:24:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:35.572 13:24:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=7944a1ca-e044-41a4-94c0-f22be74cef72 00:25:35.572 13:24:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:25:35.572 13:24:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 7944a1ca-e044-41a4-94c0-f22be74cef72 00:25:35.572 13:24:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=7944a1ca-e044-41a4-94c0-f22be74cef72 00:25:35.572 13:24:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:25:35.572 13:24:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:25:35.572 13:24:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:25:35.572 13:24:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7944a1ca-e044-41a4-94c0-f22be74cef72 00:25:35.830 13:24:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:25:35.830 { 00:25:35.830 "name": "7944a1ca-e044-41a4-94c0-f22be74cef72", 00:25:35.830 "aliases": [ 00:25:35.830 "lvs/basen1p0" 00:25:35.830 ], 00:25:35.830 "product_name": "Logical Volume", 00:25:35.830 "block_size": 4096, 00:25:35.830 "num_blocks": 5242880, 00:25:35.830 "uuid": "7944a1ca-e044-41a4-94c0-f22be74cef72", 00:25:35.830 "assigned_rate_limits": { 00:25:35.830 "rw_ios_per_sec": 0, 00:25:35.830 "rw_mbytes_per_sec": 0, 00:25:35.830 "r_mbytes_per_sec": 0, 00:25:35.830 "w_mbytes_per_sec": 0 00:25:35.830 }, 00:25:35.830 "claimed": false, 00:25:35.830 "zoned": false, 00:25:35.830 "supported_io_types": { 00:25:35.830 "read": true, 00:25:35.830 "write": true, 00:25:35.830 "unmap": true, 00:25:35.830 "write_zeroes": true, 00:25:35.830 "flush": false, 00:25:35.830 "reset": true, 00:25:35.830 "compare": false, 00:25:35.830 "compare_and_write": false, 00:25:35.830 "abort": false, 00:25:35.830 "nvme_admin": false, 00:25:35.830 "nvme_io": false 00:25:35.830 }, 00:25:35.830 "driver_specific": { 00:25:35.830 "lvol": { 00:25:35.830 "lvol_store_uuid": "ba446a15-97a6-4ff0-aa53-722c526778c3", 00:25:35.830 "base_bdev": "basen1", 00:25:35.830 "thin_provision": true, 00:25:35.830 "num_allocated_clusters": 0, 00:25:35.830 
"snapshot": false, 00:25:35.830 "clone": false, 00:25:35.830 "esnap_clone": false 00:25:35.830 } 00:25:35.830 } 00:25:35.830 } 00:25:35.830 ]' 00:25:35.830 13:24:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:25:35.830 13:24:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:25:35.830 13:24:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:25:35.830 13:24:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # nb=5242880 00:25:35.830 13:24:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=20480 00:25:35.830 13:24:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # echo 20480 00:25:35.830 13:24:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:25:35.830 13:24:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:35.830 13:24:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:25:36.088 13:24:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:25:36.088 13:24:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:25:36.088 13:24:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:25:36.345 13:24:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:25:36.345 13:24:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:25:36.345 13:24:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 7944a1ca-e044-41a4-94c0-f22be74cef72 -c cachen1p0 --l2p_dram_limit 2 00:25:36.603 [2024-07-15 13:24:33.273452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.603 [2024-07-15 13:24:33.273527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:36.603 [2024-07-15 13:24:33.273554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:36.603 [2024-07-15 13:24:33.273568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.603 [2024-07-15 13:24:33.273666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.604 [2024-07-15 13:24:33.273686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:36.604 [2024-07-15 13:24:33.273703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:25:36.604 [2024-07-15 13:24:33.273719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.604 [2024-07-15 13:24:33.273765] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:36.604 [2024-07-15 13:24:33.274179] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:36.604 [2024-07-15 13:24:33.274228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.604 [2024-07-15 13:24:33.274246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:36.604 [2024-07-15 13:24:33.274262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.464 ms 00:25:36.604 [2024-07-15 13:24:33.274275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.604 [2024-07-15 13:24:33.274427] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: 
*NOTICE*: [FTL][ftl] Create new FTL, UUID 64caf7b5-2004-41b7-a408-7289ffbc411e 00:25:36.604 [2024-07-15 13:24:33.276238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.604 [2024-07-15 13:24:33.276284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:25:36.604 [2024-07-15 13:24:33.276302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:25:36.604 [2024-07-15 13:24:33.276322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.604 [2024-07-15 13:24:33.285934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.604 [2024-07-15 13:24:33.286005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:36.604 [2024-07-15 13:24:33.286027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.545 ms 00:25:36.604 [2024-07-15 13:24:33.286043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.604 [2024-07-15 13:24:33.286165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.604 [2024-07-15 13:24:33.286202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:36.604 [2024-07-15 13:24:33.286223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:25:36.604 [2024-07-15 13:24:33.286237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.604 [2024-07-15 13:24:33.286348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.604 [2024-07-15 13:24:33.286371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:36.604 [2024-07-15 13:24:33.286385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:25:36.604 [2024-07-15 13:24:33.286400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.604 [2024-07-15 13:24:33.286437] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:36.604 [2024-07-15 13:24:33.288710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.604 [2024-07-15 13:24:33.288757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:36.604 [2024-07-15 13:24:33.288788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.279 ms 00:25:36.604 [2024-07-15 13:24:33.288801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.604 [2024-07-15 13:24:33.288846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.604 [2024-07-15 13:24:33.288862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:36.604 [2024-07-15 13:24:33.288888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:36.604 [2024-07-15 13:24:33.288900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.604 [2024-07-15 13:24:33.288933] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:25:36.604 [2024-07-15 13:24:33.289110] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:25:36.604 [2024-07-15 13:24:33.289134] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:36.604 [2024-07-15 13:24:33.289362] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:25:36.604 [2024-07-15 13:24:33.289439] 
ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:36.604 [2024-07-15 13:24:33.289503] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:25:36.604 [2024-07-15 13:24:33.289652] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:36.604 [2024-07-15 13:24:33.289701] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:36.604 [2024-07-15 13:24:33.289764] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:25:36.604 [2024-07-15 13:24:33.289802] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:25:36.604 [2024-07-15 13:24:33.289929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.604 [2024-07-15 13:24:33.289957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:36.604 [2024-07-15 13:24:33.289974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.999 ms 00:25:36.604 [2024-07-15 13:24:33.289987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.604 [2024-07-15 13:24:33.290105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.604 [2024-07-15 13:24:33.290123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:36.604 [2024-07-15 13:24:33.290142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.078 ms 00:25:36.604 [2024-07-15 13:24:33.290170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.604 [2024-07-15 13:24:33.290321] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:36.604 [2024-07-15 13:24:33.290343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:36.604 [2024-07-15 13:24:33.290359] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:36.604 [2024-07-15 13:24:33.290373] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:36.604 [2024-07-15 13:24:33.290387] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:36.604 [2024-07-15 13:24:33.290398] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:36.604 [2024-07-15 13:24:33.290412] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:36.604 [2024-07-15 13:24:33.290424] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:36.604 [2024-07-15 13:24:33.290437] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:36.604 [2024-07-15 13:24:33.290449] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:36.604 [2024-07-15 13:24:33.290462] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:36.604 [2024-07-15 13:24:33.290474] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:25:36.604 [2024-07-15 13:24:33.290487] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:36.604 [2024-07-15 13:24:33.290500] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:36.604 [2024-07-15 13:24:33.290516] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:25:36.604 [2024-07-15 13:24:33.290528] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:36.604 [2024-07-15 13:24:33.290541] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:36.604 [2024-07-15 13:24:33.290552] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:25:36.604 [2024-07-15 13:24:33.290566] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:36.604 [2024-07-15 13:24:33.290578] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:36.604 [2024-07-15 13:24:33.290591] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:36.604 [2024-07-15 13:24:33.290603] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:36.604 [2024-07-15 13:24:33.290625] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:36.604 [2024-07-15 13:24:33.290636] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:36.604 [2024-07-15 13:24:33.290650] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:36.604 [2024-07-15 13:24:33.290661] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:36.604 [2024-07-15 13:24:33.290675] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:36.604 [2024-07-15 13:24:33.290686] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:36.604 [2024-07-15 13:24:33.290702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:36.604 [2024-07-15 13:24:33.290713] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:25:36.604 [2024-07-15 13:24:33.290731] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:36.604 [2024-07-15 13:24:33.290745] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:36.604 [2024-07-15 13:24:33.290759] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:25:36.604 [2024-07-15 13:24:33.290771] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:36.604 [2024-07-15 13:24:33.290785] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:36.604 [2024-07-15 13:24:33.290797] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:25:36.604 [2024-07-15 13:24:33.290811] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:36.604 [2024-07-15 13:24:33.290822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:25:36.604 [2024-07-15 13:24:33.290838] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:25:36.604 [2024-07-15 13:24:33.290850] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:36.604 [2024-07-15 13:24:33.290864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:25:36.604 [2024-07-15 13:24:33.290876] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:25:36.604 [2024-07-15 13:24:33.290890] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:36.604 [2024-07-15 13:24:33.290901] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:25:36.604 [2024-07-15 13:24:33.290915] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:36.604 [2024-07-15 13:24:33.290927] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:36.604 [2024-07-15 13:24:33.290948] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:36.604 [2024-07-15 13:24:33.290960] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:36.604 [2024-07-15 13:24:33.290974] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:36.604 [2024-07-15 13:24:33.290986] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:36.604 [2024-07-15 13:24:33.291000] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:36.604 [2024-07-15 13:24:33.291012] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:36.604 [2024-07-15 13:24:33.291028] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:36.604 [2024-07-15 13:24:33.291046] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:36.604 [2024-07-15 13:24:33.291068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:36.604 [2024-07-15 13:24:33.291081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:36.604 [2024-07-15 13:24:33.291096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:25:36.604 [2024-07-15 13:24:33.291108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:25:36.605 [2024-07-15 13:24:33.291122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:25:36.605 [2024-07-15 13:24:33.291163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:25:36.605 [2024-07-15 13:24:33.291182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:25:36.605 [2024-07-15 13:24:33.291195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:25:36.605 [2024-07-15 13:24:33.291213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:25:36.605 [2024-07-15 13:24:33.291226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:25:36.605 [2024-07-15 13:24:33.291241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:25:36.605 [2024-07-15 13:24:33.291253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:25:36.605 [2024-07-15 13:24:33.291268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:25:36.605 [2024-07-15 13:24:33.291280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:25:36.605 [2024-07-15 13:24:33.291295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:25:36.605 [2024-07-15 13:24:33.291307] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:36.605 [2024-07-15 13:24:33.291323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:36.605 [2024-07-15 13:24:33.291337] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:36.605 [2024-07-15 13:24:33.291352] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:36.605 [2024-07-15 13:24:33.291365] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:36.605 [2024-07-15 13:24:33.291381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:36.605 [2024-07-15 13:24:33.291395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.605 [2024-07-15 13:24:33.291410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:36.605 [2024-07-15 13:24:33.291423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.145 ms 00:25:36.605 [2024-07-15 13:24:33.291439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.605 [2024-07-15 13:24:33.291504] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:25:36.605 [2024-07-15 13:24:33.291525] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:25:40.004 [2024-07-15 13:24:36.025986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.004 [2024-07-15 13:24:36.026315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:25:40.004 [2024-07-15 13:24:36.026463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2734.492 ms 00:25:40.004 [2024-07-15 13:24:36.026521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.004 [2024-07-15 13:24:36.040609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.004 [2024-07-15 13:24:36.040887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:40.004 [2024-07-15 13:24:36.041017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.866 ms 00:25:40.004 [2024-07-15 13:24:36.041142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.004 [2024-07-15 13:24:36.041333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.004 [2024-07-15 13:24:36.041401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:40.004 [2024-07-15 13:24:36.041528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:25:40.004 [2024-07-15 13:24:36.041585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.004 [2024-07-15 13:24:36.054794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.004 [2024-07-15 13:24:36.055062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:40.004 [2024-07-15 13:24:36.055214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.053 ms 00:25:40.004 [2024-07-15 13:24:36.055349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.004 [2024-07-15 13:24:36.055464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.004 [2024-07-15 13:24:36.055552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:40.004 [2024-07-15 13:24:36.055688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:40.004 [2024-07-15 13:24:36.055746] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.004 [2024-07-15 13:24:36.056493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.004 [2024-07-15 13:24:36.056630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:40.004 [2024-07-15 13:24:36.056738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.575 ms 00:25:40.004 [2024-07-15 13:24:36.056854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.004 [2024-07-15 13:24:36.056958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.004 [2024-07-15 13:24:36.057031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:40.004 [2024-07-15 13:24:36.057137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:25:40.004 [2024-07-15 13:24:36.057311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.004 [2024-07-15 13:24:36.066597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.004 [2024-07-15 13:24:36.066839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:40.004 [2024-07-15 13:24:36.066982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.210 ms 00:25:40.004 [2024-07-15 13:24:36.067039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.005 [2024-07-15 13:24:36.077645] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:40.005 [2024-07-15 13:24:36.079235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.005 [2024-07-15 13:24:36.079375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:40.005 [2024-07-15 13:24:36.079504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.920 ms 00:25:40.005 [2024-07-15 13:24:36.079528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.005 [2024-07-15 13:24:36.107513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.005 [2024-07-15 13:24:36.107602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:25:40.005 [2024-07-15 13:24:36.107655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.911 ms 00:25:40.005 [2024-07-15 13:24:36.107673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.005 [2024-07-15 13:24:36.107829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.005 [2024-07-15 13:24:36.107855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:40.005 [2024-07-15 13:24:36.107877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.076 ms 00:25:40.005 [2024-07-15 13:24:36.107893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.005 [2024-07-15 13:24:36.111788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.005 [2024-07-15 13:24:36.111843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:25:40.005 [2024-07-15 13:24:36.111871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.822 ms 00:25:40.005 [2024-07-15 13:24:36.111892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.005 [2024-07-15 13:24:36.115555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.005 [2024-07-15 13:24:36.115609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 
00:25:40.005 [2024-07-15 13:24:36.115635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.598 ms 00:25:40.005 [2024-07-15 13:24:36.115650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.005 [2024-07-15 13:24:36.116157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.005 [2024-07-15 13:24:36.116188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:40.005 [2024-07-15 13:24:36.116210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.432 ms 00:25:40.005 [2024-07-15 13:24:36.116225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.005 [2024-07-15 13:24:36.156330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.005 [2024-07-15 13:24:36.156420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:25:40.005 [2024-07-15 13:24:36.156452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.016 ms 00:25:40.005 [2024-07-15 13:24:36.156474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.005 [2024-07-15 13:24:36.162496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.005 [2024-07-15 13:24:36.162566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:25:40.005 [2024-07-15 13:24:36.162601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.924 ms 00:25:40.005 [2024-07-15 13:24:36.162618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.005 [2024-07-15 13:24:36.167413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.005 [2024-07-15 13:24:36.167476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:25:40.005 [2024-07-15 13:24:36.167503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.724 ms 00:25:40.005 [2024-07-15 13:24:36.167519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.005 [2024-07-15 13:24:36.171929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.005 [2024-07-15 13:24:36.171988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:25:40.005 [2024-07-15 13:24:36.172015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.341 ms 00:25:40.005 [2024-07-15 13:24:36.172031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.005 [2024-07-15 13:24:36.172116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.005 [2024-07-15 13:24:36.172139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:40.005 [2024-07-15 13:24:36.172189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:25:40.005 [2024-07-15 13:24:36.172205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.005 [2024-07-15 13:24:36.172307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.005 [2024-07-15 13:24:36.172328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:40.005 [2024-07-15 13:24:36.172347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:25:40.005 [2024-07-15 13:24:36.172379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.005 [2024-07-15 13:24:36.173969] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2899.910 ms, result 0 00:25:40.005 { 00:25:40.005 "name": 
"ftl", 00:25:40.005 "uuid": "64caf7b5-2004-41b7-a408-7289ffbc411e" 00:25:40.005 } 00:25:40.005 13:24:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:25:40.005 [2024-07-15 13:24:36.401896] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.005 13:24:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:25:40.005 13:24:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:25:40.263 [2024-07-15 13:24:36.927026] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:25:40.264 13:24:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:25:40.521 [2024-07-15 13:24:37.207613] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:40.521 13:24:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:25:41.085 Fill FTL, iteration 1 00:25:41.086 13:24:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:25:41.086 13:24:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:25:41.086 13:24:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:25:41.086 13:24:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:25:41.086 13:24:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:25:41.086 13:24:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:25:41.086 13:24:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:25:41.086 13:24:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:25:41.086 13:24:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:25:41.086 13:24:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:41.086 13:24:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:25:41.086 13:24:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:41.086 13:24:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:41.086 13:24:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:41.086 13:24:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:41.086 13:24:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:25:41.086 13:24:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=94632 00:25:41.086 13:24:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:25:41.086 13:24:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:25:41.086 13:24:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 94632 /var/tmp/spdk.tgt.sock 00:25:41.086 13:24:37 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@827 -- # '[' -z 94632 ']' 00:25:41.086 13:24:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:25:41.086 13:24:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:41.086 13:24:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:25:41.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:25:41.086 13:24:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:41.086 13:24:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:41.086 [2024-07-15 13:24:37.683330] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:25:41.086 [2024-07-15 13:24:37.683493] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94632 ] 00:25:41.344 [2024-07-15 13:24:37.828286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.344 [2024-07-15 13:24:37.931008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.910 13:24:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:41.910 13:24:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:25:41.910 13:24:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:25:42.476 ftln1 00:25:42.476 13:24:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:25:42.476 13:24:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:25:42.476 13:24:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:25:42.476 13:24:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 94632 00:25:42.476 13:24:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@946 -- # '[' -z 94632 ']' 00:25:42.476 13:24:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # kill -0 94632 00:25:42.476 13:24:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # uname 00:25:42.476 13:24:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:42.476 13:24:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94632 00:25:42.734 killing process with pid 94632 00:25:42.734 13:24:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:42.734 13:24:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:42.734 13:24:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94632' 00:25:42.734 13:24:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@965 -- # kill 94632 00:25:42.734 13:24:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # wait 94632 00:25:42.991 13:24:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:25:42.991 13:24:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:43.250 [2024-07-15 13:24:39.752946] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:25:43.250 [2024-07-15 13:24:39.753125] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94671 ] 00:25:43.250 [2024-07-15 13:24:39.895428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.507 [2024-07-15 13:24:39.993312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:48.872  Copying: 212/1024 [MB] (212 MBps) Copying: 424/1024 [MB] (212 MBps) Copying: 635/1024 [MB] (211 MBps) Copying: 844/1024 [MB] (209 MBps) Copying: 1024/1024 [MB] (average 210 MBps) 00:25:48.872 00:25:48.872 Calculate MD5 checksum, iteration 1 00:25:48.872 13:24:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:25:48.872 13:24:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:25:48.872 13:24:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:48.872 13:24:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:48.872 13:24:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:48.872 13:24:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:48.872 13:24:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:48.872 13:24:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:48.872 [2024-07-15 13:24:45.477703] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:25:48.872 [2024-07-15 13:24:45.478004] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94730 ] 00:25:49.129 [2024-07-15 13:24:45.624254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.129 [2024-07-15 13:24:45.722576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:51.694  Copying: 512/1024 [MB] (512 MBps) Copying: 972/1024 [MB] (460 MBps) Copying: 1024/1024 [MB] (average 476 MBps) 00:25:51.694 00:25:51.694 13:24:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:25:51.694 13:24:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:54.222 13:24:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:54.222 Fill FTL, iteration 2 00:25:54.222 13:24:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=4ed88b2c3dcbbab165b9cd8aca0245e8 00:25:54.222 13:24:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:54.222 13:24:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:54.222 13:24:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:25:54.222 13:24:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:54.222 13:24:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:54.222 13:24:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:54.222 13:24:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:54.222 13:24:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:54.222 13:24:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:54.222 [2024-07-15 13:24:50.726018] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:25:54.222 [2024-07-15 13:24:50.726247] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94787 ] 00:25:54.222 [2024-07-15 13:24:50.876020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.480 [2024-07-15 13:24:50.974251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.576  Copying: 208/1024 [MB] (208 MBps) Copying: 420/1024 [MB] (212 MBps) Copying: 633/1024 [MB] (213 MBps) Copying: 848/1024 [MB] (215 MBps) Copying: 1024/1024 [MB] (average 212 MBps) 00:25:59.576 00:25:59.833 13:24:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:25:59.833 13:24:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:25:59.833 Calculate MD5 checksum, iteration 2 00:25:59.833 13:24:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:59.833 13:24:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:59.833 13:24:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:59.833 13:24:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:59.833 13:24:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:59.833 13:24:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:59.833 [2024-07-15 13:24:56.425197] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:25:59.833 [2024-07-15 13:24:56.425377] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94844 ] 00:26:00.091 [2024-07-15 13:24:56.574403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.091 [2024-07-15 13:24:56.672671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.327  Copying: 515/1024 [MB] (515 MBps) Copying: 1008/1024 [MB] (493 MBps) Copying: 1024/1024 [MB] (average 503 MBps) 00:26:03.327 00:26:03.327 13:24:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:26:03.327 13:24:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:05.856 13:25:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:05.856 13:25:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=4dec7460871601e9a0b4fb61c50a6343 00:26:05.856 13:25:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:05.856 13:25:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:05.856 13:25:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:05.856 [2024-07-15 13:25:02.247664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.856 [2024-07-15 13:25:02.247747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:05.856 [2024-07-15 13:25:02.247771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:05.856 [2024-07-15 13:25:02.247783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.856 [2024-07-15 13:25:02.247822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.856 [2024-07-15 13:25:02.247846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:05.856 [2024-07-15 13:25:02.247859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:05.856 [2024-07-15 13:25:02.247871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.856 [2024-07-15 13:25:02.247902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.856 [2024-07-15 13:25:02.247916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:05.856 [2024-07-15 13:25:02.247940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:05.856 [2024-07-15 13:25:02.247953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.856 [2024-07-15 13:25:02.248041] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.363 ms, result 0 00:26:05.856 true 00:26:05.856 13:25:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:05.856 { 00:26:05.856 "name": "ftl", 00:26:05.856 "properties": [ 00:26:05.856 { 00:26:05.856 "name": "superblock_version", 00:26:05.856 "value": 5, 00:26:05.856 "read-only": true 00:26:05.856 }, 00:26:05.856 { 00:26:05.856 "name": "base_device", 00:26:05.856 "bands": [ 00:26:05.856 { 00:26:05.856 "id": 0, 00:26:05.856 "state": "FREE", 00:26:05.856 "validity": 0.0 00:26:05.856 }, 00:26:05.856 { 00:26:05.856 
"id": 1, 00:26:05.856 "state": "FREE", 00:26:05.856 "validity": 0.0 00:26:05.856 }, 00:26:05.856 { 00:26:05.856 "id": 2, 00:26:05.856 "state": "FREE", 00:26:05.856 "validity": 0.0 00:26:05.856 }, 00:26:05.856 { 00:26:05.856 "id": 3, 00:26:05.856 "state": "FREE", 00:26:05.856 "validity": 0.0 00:26:05.856 }, 00:26:05.856 { 00:26:05.856 "id": 4, 00:26:05.856 "state": "FREE", 00:26:05.856 "validity": 0.0 00:26:05.856 }, 00:26:05.856 { 00:26:05.856 "id": 5, 00:26:05.856 "state": "FREE", 00:26:05.856 "validity": 0.0 00:26:05.856 }, 00:26:05.856 { 00:26:05.856 "id": 6, 00:26:05.856 "state": "FREE", 00:26:05.856 "validity": 0.0 00:26:05.856 }, 00:26:05.856 { 00:26:05.856 "id": 7, 00:26:05.856 "state": "FREE", 00:26:05.856 "validity": 0.0 00:26:05.856 }, 00:26:05.856 { 00:26:05.856 "id": 8, 00:26:05.856 "state": "FREE", 00:26:05.856 "validity": 0.0 00:26:05.856 }, 00:26:05.856 { 00:26:05.856 "id": 9, 00:26:05.856 "state": "FREE", 00:26:05.856 "validity": 0.0 00:26:05.856 }, 00:26:05.856 { 00:26:05.856 "id": 10, 00:26:05.856 "state": "FREE", 00:26:05.856 "validity": 0.0 00:26:05.856 }, 00:26:05.856 { 00:26:05.856 "id": 11, 00:26:05.856 "state": "FREE", 00:26:05.856 "validity": 0.0 00:26:05.856 }, 00:26:05.856 { 00:26:05.856 "id": 12, 00:26:05.856 "state": "FREE", 00:26:05.856 "validity": 0.0 00:26:05.856 }, 00:26:05.856 { 00:26:05.856 "id": 13, 00:26:05.857 "state": "FREE", 00:26:05.857 "validity": 0.0 00:26:05.857 }, 00:26:05.857 { 00:26:05.857 "id": 14, 00:26:05.857 "state": "FREE", 00:26:05.857 "validity": 0.0 00:26:05.857 }, 00:26:05.857 { 00:26:05.857 "id": 15, 00:26:05.857 "state": "FREE", 00:26:05.857 "validity": 0.0 00:26:05.857 }, 00:26:05.857 { 00:26:05.857 "id": 16, 00:26:05.857 "state": "FREE", 00:26:05.857 "validity": 0.0 00:26:05.857 }, 00:26:05.857 { 00:26:05.857 "id": 17, 00:26:05.857 "state": "FREE", 00:26:05.857 "validity": 0.0 00:26:05.857 } 00:26:05.857 ], 00:26:05.857 "read-only": true 00:26:05.857 }, 00:26:05.857 { 00:26:05.857 "name": "cache_device", 00:26:05.857 "type": "bdev", 00:26:05.857 "chunks": [ 00:26:05.857 { 00:26:05.857 "id": 0, 00:26:05.857 "state": "INACTIVE", 00:26:05.857 "utilization": 0.0 00:26:05.857 }, 00:26:05.857 { 00:26:05.857 "id": 1, 00:26:05.857 "state": "CLOSED", 00:26:05.857 "utilization": 1.0 00:26:05.857 }, 00:26:05.857 { 00:26:05.857 "id": 2, 00:26:05.857 "state": "CLOSED", 00:26:05.857 "utilization": 1.0 00:26:05.857 }, 00:26:05.857 { 00:26:05.857 "id": 3, 00:26:05.857 "state": "OPEN", 00:26:05.857 "utilization": 0.001953125 00:26:05.857 }, 00:26:05.857 { 00:26:05.857 "id": 4, 00:26:05.857 "state": "OPEN", 00:26:05.857 "utilization": 0.0 00:26:05.857 } 00:26:05.857 ], 00:26:05.857 "read-only": true 00:26:05.857 }, 00:26:05.857 { 00:26:05.857 "name": "verbose_mode", 00:26:05.857 "value": true, 00:26:05.857 "unit": "", 00:26:05.857 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:05.857 }, 00:26:05.857 { 00:26:05.857 "name": "prep_upgrade_on_shutdown", 00:26:05.857 "value": false, 00:26:05.857 "unit": "", 00:26:05.857 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:05.857 } 00:26:05.857 ] 00:26:05.857 } 00:26:05.857 13:25:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:26:06.115 [2024-07-15 13:25:02.739043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.115 [2024-07-15 13:25:02.739420] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:06.115 [2024-07-15 13:25:02.739550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:26:06.115 [2024-07-15 13:25:02.739601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.115 [2024-07-15 13:25:02.739761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.115 [2024-07-15 13:25:02.739813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:06.115 [2024-07-15 13:25:02.739853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:06.115 [2024-07-15 13:25:02.739968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.115 [2024-07-15 13:25:02.740044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.115 [2024-07-15 13:25:02.740107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:06.115 [2024-07-15 13:25:02.740224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:06.115 [2024-07-15 13:25:02.740335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.115 [2024-07-15 13:25:02.740471] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 1.424 ms, result 0 00:26:06.115 true 00:26:06.115 13:25:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:06.115 13:25:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:26:06.115 13:25:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:06.373 13:25:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:26:06.373 13:25:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:26:06.373 13:25:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:06.630 [2024-07-15 13:25:03.255606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.630 [2024-07-15 13:25:03.255948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:06.630 [2024-07-15 13:25:03.255993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:26:06.630 [2024-07-15 13:25:03.256007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.630 [2024-07-15 13:25:03.256056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.630 [2024-07-15 13:25:03.256073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:06.630 [2024-07-15 13:25:03.256094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:06.630 [2024-07-15 13:25:03.256106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.630 [2024-07-15 13:25:03.256135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.630 [2024-07-15 13:25:03.256170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:06.630 [2024-07-15 13:25:03.256186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:06.630 [2024-07-15 13:25:03.256197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.630 [2024-07-15 
13:25:03.256281] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.665 ms, result 0 00:26:06.630 true 00:26:06.630 13:25:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:06.888 { 00:26:06.888 "name": "ftl", 00:26:06.888 "properties": [ 00:26:06.888 { 00:26:06.888 "name": "superblock_version", 00:26:06.888 "value": 5, 00:26:06.888 "read-only": true 00:26:06.888 }, 00:26:06.888 { 00:26:06.888 "name": "base_device", 00:26:06.888 "bands": [ 00:26:06.888 { 00:26:06.888 "id": 0, 00:26:06.888 "state": "FREE", 00:26:06.888 "validity": 0.0 00:26:06.888 }, 00:26:06.888 { 00:26:06.888 "id": 1, 00:26:06.888 "state": "FREE", 00:26:06.888 "validity": 0.0 00:26:06.888 }, 00:26:06.888 { 00:26:06.888 "id": 2, 00:26:06.888 "state": "FREE", 00:26:06.888 "validity": 0.0 00:26:06.888 }, 00:26:06.888 { 00:26:06.888 "id": 3, 00:26:06.888 "state": "FREE", 00:26:06.888 "validity": 0.0 00:26:06.888 }, 00:26:06.888 { 00:26:06.888 "id": 4, 00:26:06.888 "state": "FREE", 00:26:06.888 "validity": 0.0 00:26:06.888 }, 00:26:06.888 { 00:26:06.888 "id": 5, 00:26:06.888 "state": "FREE", 00:26:06.888 "validity": 0.0 00:26:06.888 }, 00:26:06.888 { 00:26:06.888 "id": 6, 00:26:06.888 "state": "FREE", 00:26:06.888 "validity": 0.0 00:26:06.888 }, 00:26:06.888 { 00:26:06.888 "id": 7, 00:26:06.888 "state": "FREE", 00:26:06.888 "validity": 0.0 00:26:06.888 }, 00:26:06.888 { 00:26:06.888 "id": 8, 00:26:06.888 "state": "FREE", 00:26:06.888 "validity": 0.0 00:26:06.888 }, 00:26:06.888 { 00:26:06.888 "id": 9, 00:26:06.888 "state": "FREE", 00:26:06.888 "validity": 0.0 00:26:06.888 }, 00:26:06.888 { 00:26:06.888 "id": 10, 00:26:06.888 "state": "FREE", 00:26:06.888 "validity": 0.0 00:26:06.888 }, 00:26:06.888 { 00:26:06.888 "id": 11, 00:26:06.888 "state": "FREE", 00:26:06.888 "validity": 0.0 00:26:06.888 }, 00:26:06.888 { 00:26:06.888 "id": 12, 00:26:06.888 "state": "FREE", 00:26:06.888 "validity": 0.0 00:26:06.888 }, 00:26:06.888 { 00:26:06.888 "id": 13, 00:26:06.888 "state": "FREE", 00:26:06.888 "validity": 0.0 00:26:06.888 }, 00:26:06.888 { 00:26:06.888 "id": 14, 00:26:06.888 "state": "FREE", 00:26:06.888 "validity": 0.0 00:26:06.888 }, 00:26:06.888 { 00:26:06.888 "id": 15, 00:26:06.888 "state": "FREE", 00:26:06.888 "validity": 0.0 00:26:06.888 }, 00:26:06.888 { 00:26:06.888 "id": 16, 00:26:06.888 "state": "FREE", 00:26:06.888 "validity": 0.0 00:26:06.888 }, 00:26:06.888 { 00:26:06.888 "id": 17, 00:26:06.888 "state": "FREE", 00:26:06.888 "validity": 0.0 00:26:06.888 } 00:26:06.888 ], 00:26:06.888 "read-only": true 00:26:06.888 }, 00:26:06.888 { 00:26:06.888 "name": "cache_device", 00:26:06.888 "type": "bdev", 00:26:06.888 "chunks": [ 00:26:06.888 { 00:26:06.888 "id": 0, 00:26:06.888 "state": "INACTIVE", 00:26:06.888 "utilization": 0.0 00:26:06.888 }, 00:26:06.888 { 00:26:06.888 "id": 1, 00:26:06.888 "state": "CLOSED", 00:26:06.888 "utilization": 1.0 00:26:06.888 }, 00:26:06.888 { 00:26:06.888 "id": 2, 00:26:06.888 "state": "CLOSED", 00:26:06.888 "utilization": 1.0 00:26:06.888 }, 00:26:06.888 { 00:26:06.888 "id": 3, 00:26:06.888 "state": "OPEN", 00:26:06.888 "utilization": 0.001953125 00:26:06.888 }, 00:26:06.888 { 00:26:06.888 "id": 4, 00:26:06.888 "state": "OPEN", 00:26:06.888 "utilization": 0.0 00:26:06.888 } 00:26:06.888 ], 00:26:06.888 "read-only": true 00:26:06.888 }, 00:26:06.888 { 00:26:06.888 "name": "verbose_mode", 00:26:06.888 "value": true, 00:26:06.888 "unit": 
"", 00:26:06.888 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:06.888 }, 00:26:06.888 { 00:26:06.888 "name": "prep_upgrade_on_shutdown", 00:26:06.888 "value": true, 00:26:06.888 "unit": "", 00:26:06.888 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:06.888 } 00:26:06.888 ] 00:26:06.888 } 00:26:06.888 13:25:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:26:06.888 13:25:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 94515 ]] 00:26:06.888 13:25:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 94515 00:26:06.888 13:25:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@946 -- # '[' -z 94515 ']' 00:26:06.888 13:25:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # kill -0 94515 00:26:06.888 13:25:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # uname 00:26:06.888 13:25:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:06.888 13:25:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94515 00:26:06.888 killing process with pid 94515 00:26:06.888 13:25:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:06.888 13:25:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:06.888 13:25:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94515' 00:26:06.888 13:25:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@965 -- # kill 94515 00:26:06.888 13:25:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # wait 94515 00:26:07.147 [2024-07-15 13:25:03.752398] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:26:07.147 [2024-07-15 13:25:03.759655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.147 [2024-07-15 13:25:03.759724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:07.147 [2024-07-15 13:25:03.759747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:07.147 [2024-07-15 13:25:03.759759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.147 [2024-07-15 13:25:03.759794] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:07.147 [2024-07-15 13:25:03.760618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.147 [2024-07-15 13:25:03.760643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:07.147 [2024-07-15 13:25:03.760658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.803 ms 00:26:07.147 [2024-07-15 13:25:03.760669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.159 [2024-07-15 13:25:12.227117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.159 [2024-07-15 13:25:12.227225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:17.159 [2024-07-15 13:25:12.227276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8466.445 ms 00:26:17.159 [2024-07-15 13:25:12.227314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.159 [2024-07-15 13:25:12.229128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.159 [2024-07-15 
13:25:12.229218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:17.159 [2024-07-15 13:25:12.229253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.747 ms 00:26:17.159 [2024-07-15 13:25:12.229279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.159 [2024-07-15 13:25:12.230925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.159 [2024-07-15 13:25:12.230978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:26:17.159 [2024-07-15 13:25:12.231008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.571 ms 00:26:17.159 [2024-07-15 13:25:12.231033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.159 [2024-07-15 13:25:12.233318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.159 [2024-07-15 13:25:12.233384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:17.159 [2024-07-15 13:25:12.233415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.130 ms 00:26:17.159 [2024-07-15 13:25:12.233440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.159 [2024-07-15 13:25:12.235858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.159 [2024-07-15 13:25:12.235917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:17.159 [2024-07-15 13:25:12.235951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.346 ms 00:26:17.159 [2024-07-15 13:25:12.235976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.159 [2024-07-15 13:25:12.236130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.159 [2024-07-15 13:25:12.236200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:17.159 [2024-07-15 13:25:12.236242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.077 ms 00:26:17.159 [2024-07-15 13:25:12.236267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.159 [2024-07-15 13:25:12.237751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.159 [2024-07-15 13:25:12.237809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:26:17.159 [2024-07-15 13:25:12.237841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.423 ms 00:26:17.159 [2024-07-15 13:25:12.237865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.159 [2024-07-15 13:25:12.239439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.159 [2024-07-15 13:25:12.239493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:26:17.159 [2024-07-15 13:25:12.239524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.505 ms 00:26:17.159 [2024-07-15 13:25:12.239549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.159 [2024-07-15 13:25:12.240928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.159 [2024-07-15 13:25:12.240985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:17.159 [2024-07-15 13:25:12.241016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.307 ms 00:26:17.159 [2024-07-15 13:25:12.241041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.159 [2024-07-15 13:25:12.242467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:26:17.159 [2024-07-15 13:25:12.242520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:17.159 [2024-07-15 13:25:12.242551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.268 ms 00:26:17.159 [2024-07-15 13:25:12.242575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.159 [2024-07-15 13:25:12.242644] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:17.159 [2024-07-15 13:25:12.242685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:17.159 [2024-07-15 13:25:12.242717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:17.159 [2024-07-15 13:25:12.242743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:17.159 [2024-07-15 13:25:12.242769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:17.159 [2024-07-15 13:25:12.242795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:17.159 [2024-07-15 13:25:12.242821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:17.159 [2024-07-15 13:25:12.242846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:17.159 [2024-07-15 13:25:12.242872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:17.159 [2024-07-15 13:25:12.242897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:17.159 [2024-07-15 13:25:12.242922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:17.159 [2024-07-15 13:25:12.242949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:17.159 [2024-07-15 13:25:12.242974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:17.159 [2024-07-15 13:25:12.242999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:17.159 [2024-07-15 13:25:12.243026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:17.159 [2024-07-15 13:25:12.243051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:17.159 [2024-07-15 13:25:12.243076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:17.159 [2024-07-15 13:25:12.243101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:17.159 [2024-07-15 13:25:12.243127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:17.159 [2024-07-15 13:25:12.243184] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:17.159 [2024-07-15 13:25:12.243211] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 64caf7b5-2004-41b7-a408-7289ffbc411e 00:26:17.159 [2024-07-15 13:25:12.243237] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:17.159 [2024-07-15 13:25:12.243260] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:26:17.159 [2024-07-15 
13:25:12.243284] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:26:17.159 [2024-07-15 13:25:12.243308] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:26:17.159 [2024-07-15 13:25:12.243331] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:17.159 [2024-07-15 13:25:12.243356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:17.159 [2024-07-15 13:25:12.243391] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:17.159 [2024-07-15 13:25:12.243414] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:17.159 [2024-07-15 13:25:12.243435] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:17.159 [2024-07-15 13:25:12.243463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.159 [2024-07-15 13:25:12.243487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:17.159 [2024-07-15 13:25:12.243513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.820 ms 00:26:17.159 [2024-07-15 13:25:12.243538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.159 [2024-07-15 13:25:12.246584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.159 [2024-07-15 13:25:12.246782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:17.159 [2024-07-15 13:25:12.246987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.980 ms 00:26:17.159 [2024-07-15 13:25:12.247218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.160 [2024-07-15 13:25:12.247504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.160 [2024-07-15 13:25:12.247595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:17.160 [2024-07-15 13:25:12.247777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.123 ms 00:26:17.160 [2024-07-15 13:25:12.247873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.160 [2024-07-15 13:25:12.257159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:17.160 [2024-07-15 13:25:12.257496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:17.160 [2024-07-15 13:25:12.257674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:17.160 [2024-07-15 13:25:12.257902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.160 [2024-07-15 13:25:12.258124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:17.160 [2024-07-15 13:25:12.258219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:17.160 [2024-07-15 13:25:12.258252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:17.160 [2024-07-15 13:25:12.258278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.160 [2024-07-15 13:25:12.258488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:17.160 [2024-07-15 13:25:12.258526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:17.160 [2024-07-15 13:25:12.258558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:17.160 [2024-07-15 13:25:12.258584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.160 [2024-07-15 13:25:12.258662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:17.160 
[2024-07-15 13:25:12.258693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:17.160 [2024-07-15 13:25:12.258720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:17.160 [2024-07-15 13:25:12.258743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.160 [2024-07-15 13:25:12.279356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:17.160 [2024-07-15 13:25:12.279459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:17.160 [2024-07-15 13:25:12.279495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:17.160 [2024-07-15 13:25:12.279535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.160 [2024-07-15 13:25:12.292316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:17.160 [2024-07-15 13:25:12.292401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:17.160 [2024-07-15 13:25:12.292436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:17.160 [2024-07-15 13:25:12.292458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.160 [2024-07-15 13:25:12.292625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:17.160 [2024-07-15 13:25:12.292662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:17.160 [2024-07-15 13:25:12.292689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:17.160 [2024-07-15 13:25:12.292714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.160 [2024-07-15 13:25:12.292829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:17.160 [2024-07-15 13:25:12.292875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:17.160 [2024-07-15 13:25:12.292902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:17.160 [2024-07-15 13:25:12.292925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.160 [2024-07-15 13:25:12.293085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:17.160 [2024-07-15 13:25:12.293130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:17.160 [2024-07-15 13:25:12.293190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:17.160 [2024-07-15 13:25:12.293218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.160 [2024-07-15 13:25:12.293315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:17.160 [2024-07-15 13:25:12.293368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:17.160 [2024-07-15 13:25:12.293394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:17.160 [2024-07-15 13:25:12.293418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.160 [2024-07-15 13:25:12.293513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:17.160 [2024-07-15 13:25:12.293556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:17.160 [2024-07-15 13:25:12.293583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:17.160 [2024-07-15 13:25:12.293607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.160 [2024-07-15 13:25:12.293709] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Rollback 00:26:17.160 [2024-07-15 13:25:12.293752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:17.160 [2024-07-15 13:25:12.293778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:17.160 [2024-07-15 13:25:12.293801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.160 [2024-07-15 13:25:12.294102] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8534.387 ms, result 0 00:26:18.098 13:25:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:18.098 13:25:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:26:18.098 13:25:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:18.098 13:25:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:18.098 13:25:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:18.098 13:25:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=95034 00:26:18.098 13:25:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:18.098 13:25:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:18.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:18.098 13:25:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 95034 00:26:18.098 13:25:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 95034 ']' 00:26:18.098 13:25:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.098 13:25:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:18.098 13:25:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:18.098 13:25:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:18.098 13:25:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:18.098 [2024-07-15 13:25:14.653054] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
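Annotation: the restart above is the ftl/common.sh killprocess + tcp_target_setup pattern — stop the old spdk_tgt by PID, start a fresh one from the saved tgt.json, and block until its RPC socket answers. The following is only a minimal sketch of that pattern (the binary, rpc.py and config paths are the ones printed in the log; the loop structure is illustrative, not the test's actual helper code):

  #!/usr/bin/env bash
  # Sketch only: restart an SPDK target and wait for its RPC socket.
  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  CONFIG=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json

  # Stop the previous target if it is still running.
  if [[ -n "${spdk_tgt_pid:-}" ]] && kill -0 "$spdk_tgt_pid" 2>/dev/null; then
      kill "$spdk_tgt_pid"
      wait "$spdk_tgt_pid" || true
  fi

  # Start a new target pinned to core 0 and restore the saved bdev configuration.
  "$SPDK_BIN" --cpumask='[0]' --config="$CONFIG" &
  spdk_tgt_pid=$!

  # Poll the default RPC socket until the target answers (waitforlisten equivalent).
  for _ in $(seq 1 100); do
      "$RPC" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.1
  done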
00:26:18.098 [2024-07-15 13:25:14.653289] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95034 ] 00:26:18.098 [2024-07-15 13:25:14.804976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.357 [2024-07-15 13:25:14.909904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.615 [2024-07-15 13:25:15.260623] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:18.615 [2024-07-15 13:25:15.260711] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:18.874 [2024-07-15 13:25:15.400701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.874 [2024-07-15 13:25:15.400776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:18.874 [2024-07-15 13:25:15.400800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:18.874 [2024-07-15 13:25:15.400812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.874 [2024-07-15 13:25:15.400943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.874 [2024-07-15 13:25:15.400973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:18.874 [2024-07-15 13:25:15.401012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.083 ms 00:26:18.874 [2024-07-15 13:25:15.401025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.874 [2024-07-15 13:25:15.401075] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:18.874 [2024-07-15 13:25:15.401532] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:18.874 [2024-07-15 13:25:15.401571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.874 [2024-07-15 13:25:15.401585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:18.874 [2024-07-15 13:25:15.401597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.504 ms 00:26:18.874 [2024-07-15 13:25:15.401609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.874 [2024-07-15 13:25:15.403772] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:18.874 [2024-07-15 13:25:15.406741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.874 [2024-07-15 13:25:15.406807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:18.874 [2024-07-15 13:25:15.406832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.971 ms 00:26:18.874 [2024-07-15 13:25:15.406845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.874 [2024-07-15 13:25:15.406946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.874 [2024-07-15 13:25:15.406967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:18.874 [2024-07-15 13:25:15.406980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:26:18.874 [2024-07-15 13:25:15.406992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.874 [2024-07-15 13:25:15.415706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.874 [2024-07-15 13:25:15.415771] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:18.874 [2024-07-15 13:25:15.415796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.618 ms 00:26:18.874 [2024-07-15 13:25:15.415811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.874 [2024-07-15 13:25:15.415900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.874 [2024-07-15 13:25:15.415926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:18.874 [2024-07-15 13:25:15.415940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:26:18.874 [2024-07-15 13:25:15.415957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.874 [2024-07-15 13:25:15.416066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.874 [2024-07-15 13:25:15.416085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:18.874 [2024-07-15 13:25:15.416109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:26:18.874 [2024-07-15 13:25:15.416121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.874 [2024-07-15 13:25:15.416191] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:18.874 [2024-07-15 13:25:15.418367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.874 [2024-07-15 13:25:15.418409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:18.874 [2024-07-15 13:25:15.418426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.189 ms 00:26:18.874 [2024-07-15 13:25:15.418455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.874 [2024-07-15 13:25:15.418508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.874 [2024-07-15 13:25:15.418537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:18.874 [2024-07-15 13:25:15.418550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:18.874 [2024-07-15 13:25:15.418561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.874 [2024-07-15 13:25:15.418624] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:18.874 [2024-07-15 13:25:15.418658] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:18.874 [2024-07-15 13:25:15.418702] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:18.875 [2024-07-15 13:25:15.418726] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:26:18.875 [2024-07-15 13:25:15.418832] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:18.875 [2024-07-15 13:25:15.418848] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:18.875 [2024-07-15 13:25:15.418863] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:26:18.875 [2024-07-15 13:25:15.418878] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:18.875 [2024-07-15 13:25:15.418892] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 
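Annotation: the startup trace above shows FTL reattaching its base and cache bdevs (cachen1p0 as write buffer cache) and reloading the superblock. A quick, hedged sketch of how the reloaded bdev and one of the properties dumped later could be checked over RPC — this is not part of the test script, just an illustration using standard rpc.py calls:

  # Sketch: confirm the reloaded FTL bdev exists and read a property over RPC.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Exits non-zero if no bdev named "ftl" is registered.
  "$RPC" bdev_get_bdevs -b ftl >/dev/null

  # superblock_version is one of the read-only properties shown in the dumps above.
  "$RPC" bdev_ftl_get_properties -b ftl \
      | jq -r '.properties[] | select(.name == "superblock_version") | .value'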
00:26:18.875 [2024-07-15 13:25:15.418904] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:18.875 [2024-07-15 13:25:15.418916] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:18.875 [2024-07-15 13:25:15.418932] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:18.875 [2024-07-15 13:25:15.418944] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:18.875 [2024-07-15 13:25:15.418960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.875 [2024-07-15 13:25:15.418972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:18.875 [2024-07-15 13:25:15.418985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.347 ms 00:26:18.875 [2024-07-15 13:25:15.418996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.875 [2024-07-15 13:25:15.419096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.875 [2024-07-15 13:25:15.419113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:18.875 [2024-07-15 13:25:15.419125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:26:18.875 [2024-07-15 13:25:15.419137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.875 [2024-07-15 13:25:15.419280] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:18.875 [2024-07-15 13:25:15.419304] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:18.875 [2024-07-15 13:25:15.419318] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:18.875 [2024-07-15 13:25:15.419329] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:18.875 [2024-07-15 13:25:15.419352] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:18.875 [2024-07-15 13:25:15.419369] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:18.875 [2024-07-15 13:25:15.419381] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:18.875 [2024-07-15 13:25:15.419392] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:18.875 [2024-07-15 13:25:15.419403] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:18.875 [2024-07-15 13:25:15.419413] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:18.875 [2024-07-15 13:25:15.419423] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:18.875 [2024-07-15 13:25:15.419433] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:18.875 [2024-07-15 13:25:15.419444] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:18.875 [2024-07-15 13:25:15.419454] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:18.875 [2024-07-15 13:25:15.419465] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:18.875 [2024-07-15 13:25:15.419474] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:18.875 [2024-07-15 13:25:15.419485] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:18.875 [2024-07-15 13:25:15.419505] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:18.875 [2024-07-15 13:25:15.419523] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:18.875 [2024-07-15 13:25:15.419535] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl] Region p2l0 00:26:18.875 [2024-07-15 13:25:15.419546] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:18.875 [2024-07-15 13:25:15.419560] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:18.875 [2024-07-15 13:25:15.419574] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:18.875 [2024-07-15 13:25:15.419592] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:18.875 [2024-07-15 13:25:15.419610] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:18.875 [2024-07-15 13:25:15.419628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:18.875 [2024-07-15 13:25:15.419640] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:18.875 [2024-07-15 13:25:15.419650] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:18.875 [2024-07-15 13:25:15.419661] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:18.875 [2024-07-15 13:25:15.419672] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:18.875 [2024-07-15 13:25:15.419690] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:18.875 [2024-07-15 13:25:15.419705] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:18.875 [2024-07-15 13:25:15.419723] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:18.875 [2024-07-15 13:25:15.419735] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:18.875 [2024-07-15 13:25:15.419746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:18.875 [2024-07-15 13:25:15.419757] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:18.875 [2024-07-15 13:25:15.419767] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:18.875 [2024-07-15 13:25:15.419782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:18.875 [2024-07-15 13:25:15.419795] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:18.875 [2024-07-15 13:25:15.419811] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:18.875 [2024-07-15 13:25:15.419823] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:18.875 [2024-07-15 13:25:15.419834] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:18.875 [2024-07-15 13:25:15.419852] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:18.875 [2024-07-15 13:25:15.419870] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:18.875 [2024-07-15 13:25:15.419885] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:18.875 [2024-07-15 13:25:15.419897] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:18.875 [2024-07-15 13:25:15.419921] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:18.875 [2024-07-15 13:25:15.419934] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:18.875 [2024-07-15 13:25:15.419950] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:18.875 [2024-07-15 13:25:15.419969] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:18.875 [2024-07-15 13:25:15.419985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:18.875 [2024-07-15 13:25:15.420005] ftl_layout.c: 119:dump_region: 
*NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:18.875 [2024-07-15 13:25:15.420023] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:18.875 [2024-07-15 13:25:15.420053] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:18.875 [2024-07-15 13:25:15.420074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:18.875 [2024-07-15 13:25:15.420087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:18.875 [2024-07-15 13:25:15.420099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:18.875 [2024-07-15 13:25:15.420111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:18.875 [2024-07-15 13:25:15.420122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:18.875 [2024-07-15 13:25:15.420134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:18.875 [2024-07-15 13:25:15.420167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:18.875 [2024-07-15 13:25:15.420182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:18.875 [2024-07-15 13:25:15.420196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:18.875 [2024-07-15 13:25:15.420216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:18.875 [2024-07-15 13:25:15.420234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:18.875 [2024-07-15 13:25:15.420246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:18.875 [2024-07-15 13:25:15.420258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:18.875 [2024-07-15 13:25:15.420272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:18.875 [2024-07-15 13:25:15.420287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:18.875 [2024-07-15 13:25:15.420303] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:18.875 [2024-07-15 13:25:15.420321] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:18.875 [2024-07-15 13:25:15.420343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:18.875 [2024-07-15 13:25:15.420363] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 
blk_sz:0x480000 00:26:18.875 [2024-07-15 13:25:15.420376] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:18.875 [2024-07-15 13:25:15.420388] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:18.875 [2024-07-15 13:25:15.420402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.875 [2024-07-15 13:25:15.420419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:18.875 [2024-07-15 13:25:15.420433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.187 ms 00:26:18.875 [2024-07-15 13:25:15.420451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.875 [2024-07-15 13:25:15.420555] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:26:18.875 [2024-07-15 13:25:15.420582] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:22.157 [2024-07-15 13:25:18.227161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.158 [2024-07-15 13:25:18.227242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:22.158 [2024-07-15 13:25:18.227279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2806.617 ms 00:26:22.158 [2024-07-15 13:25:18.227293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.158 [2024-07-15 13:25:18.241463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.158 [2024-07-15 13:25:18.241534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:22.158 [2024-07-15 13:25:18.241557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.038 ms 00:26:22.158 [2024-07-15 13:25:18.241587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.158 [2024-07-15 13:25:18.241709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.158 [2024-07-15 13:25:18.241728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:22.158 [2024-07-15 13:25:18.241750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:26:22.158 [2024-07-15 13:25:18.241762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.158 [2024-07-15 13:25:18.255232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.158 [2024-07-15 13:25:18.255296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:22.158 [2024-07-15 13:25:18.255317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.391 ms 00:26:22.158 [2024-07-15 13:25:18.255330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.158 [2024-07-15 13:25:18.255412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.158 [2024-07-15 13:25:18.255442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:22.158 [2024-07-15 13:25:18.255457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:22.158 [2024-07-15 13:25:18.255469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.158 [2024-07-15 13:25:18.256080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.158 [2024-07-15 13:25:18.256115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Initialize trim map 00:26:22.158 [2024-07-15 13:25:18.256132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.527 ms 00:26:22.158 [2024-07-15 13:25:18.256156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.158 [2024-07-15 13:25:18.256224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.158 [2024-07-15 13:25:18.256241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:22.158 [2024-07-15 13:25:18.256260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:26:22.158 [2024-07-15 13:25:18.256271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.158 [2024-07-15 13:25:18.265517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.158 [2024-07-15 13:25:18.265586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:22.158 [2024-07-15 13:25:18.265606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.211 ms 00:26:22.158 [2024-07-15 13:25:18.265619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.158 [2024-07-15 13:25:18.268842] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:22.158 [2024-07-15 13:25:18.268898] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:22.158 [2024-07-15 13:25:18.268920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.158 [2024-07-15 13:25:18.268932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:26:22.158 [2024-07-15 13:25:18.268946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.073 ms 00:26:22.158 [2024-07-15 13:25:18.268958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.158 [2024-07-15 13:25:18.273384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.158 [2024-07-15 13:25:18.273460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:26:22.158 [2024-07-15 13:25:18.273491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.372 ms 00:26:22.158 [2024-07-15 13:25:18.273505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.158 [2024-07-15 13:25:18.275923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.158 [2024-07-15 13:25:18.275968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:26:22.158 [2024-07-15 13:25:18.275986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.335 ms 00:26:22.158 [2024-07-15 13:25:18.275998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.158 [2024-07-15 13:25:18.277678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.158 [2024-07-15 13:25:18.277721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:26:22.158 [2024-07-15 13:25:18.277738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.623 ms 00:26:22.158 [2024-07-15 13:25:18.277750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.158 [2024-07-15 13:25:18.278325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.158 [2024-07-15 13:25:18.278355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:22.158 [2024-07-15 13:25:18.278370] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.473 ms 00:26:22.158 [2024-07-15 13:25:18.278396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.158 [2024-07-15 13:25:18.313857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.158 [2024-07-15 13:25:18.313964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:22.158 [2024-07-15 13:25:18.313989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.429 ms 00:26:22.158 [2024-07-15 13:25:18.314003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.158 [2024-07-15 13:25:18.325776] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:22.158 [2024-07-15 13:25:18.327567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.158 [2024-07-15 13:25:18.327625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:22.158 [2024-07-15 13:25:18.327655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.439 ms 00:26:22.158 [2024-07-15 13:25:18.327674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.158 [2024-07-15 13:25:18.327856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.158 [2024-07-15 13:25:18.327887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:26:22.158 [2024-07-15 13:25:18.327911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:22.158 [2024-07-15 13:25:18.327933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.158 [2024-07-15 13:25:18.328056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.158 [2024-07-15 13:25:18.328089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:22.158 [2024-07-15 13:25:18.328110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:26:22.158 [2024-07-15 13:25:18.328129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.158 [2024-07-15 13:25:18.328251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.158 [2024-07-15 13:25:18.328277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:22.158 [2024-07-15 13:25:18.328297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:22.158 [2024-07-15 13:25:18.328314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.158 [2024-07-15 13:25:18.328383] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:22.158 [2024-07-15 13:25:18.328412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.158 [2024-07-15 13:25:18.328431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:22.158 [2024-07-15 13:25:18.328456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:26:22.158 [2024-07-15 13:25:18.328474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.158 [2024-07-15 13:25:18.333313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.158 [2024-07-15 13:25:18.333373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:22.158 [2024-07-15 13:25:18.333394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.786 ms 00:26:22.158 [2024-07-15 13:25:18.333407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:26:22.158 [2024-07-15 13:25:18.333504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.158 [2024-07-15 13:25:18.333524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:22.158 [2024-07-15 13:25:18.333547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:26:22.158 [2024-07-15 13:25:18.333559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.158 [2024-07-15 13:25:18.334991] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2933.788 ms, result 0 00:26:22.158 [2024-07-15 13:25:18.348602] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:22.158 [2024-07-15 13:25:18.364733] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:22.158 [2024-07-15 13:25:18.372788] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:22.724 13:25:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:22.724 13:25:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:26:22.724 13:25:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:22.724 13:25:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:22.724 13:25:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:22.724 [2024-07-15 13:25:19.429900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.724 [2024-07-15 13:25:19.429982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:22.724 [2024-07-15 13:25:19.430005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:22.724 [2024-07-15 13:25:19.430019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.724 [2024-07-15 13:25:19.430059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.724 [2024-07-15 13:25:19.430087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:22.724 [2024-07-15 13:25:19.430101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:22.724 [2024-07-15 13:25:19.430113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.725 [2024-07-15 13:25:19.430166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.725 [2024-07-15 13:25:19.430185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:22.725 [2024-07-15 13:25:19.430198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:26:22.725 [2024-07-15 13:25:19.430210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.725 [2024-07-15 13:25:19.430294] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.386 ms, result 0 00:26:22.725 true 00:26:22.725 13:25:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:22.983 { 00:26:22.983 "name": "ftl", 00:26:22.983 "properties": [ 00:26:22.983 { 00:26:22.983 "name": "superblock_version", 00:26:22.983 "value": 5, 00:26:22.983 "read-only": true 00:26:22.983 }, 00:26:22.983 { 
00:26:22.983 "name": "base_device", 00:26:22.983 "bands": [ 00:26:22.983 { 00:26:22.983 "id": 0, 00:26:22.983 "state": "CLOSED", 00:26:22.983 "validity": 1.0 00:26:22.983 }, 00:26:22.983 { 00:26:22.983 "id": 1, 00:26:22.983 "state": "CLOSED", 00:26:22.983 "validity": 1.0 00:26:22.983 }, 00:26:22.983 { 00:26:22.983 "id": 2, 00:26:22.983 "state": "CLOSED", 00:26:22.983 "validity": 0.007843137254901933 00:26:22.983 }, 00:26:22.983 { 00:26:22.983 "id": 3, 00:26:22.983 "state": "FREE", 00:26:22.983 "validity": 0.0 00:26:22.983 }, 00:26:22.983 { 00:26:22.983 "id": 4, 00:26:22.983 "state": "FREE", 00:26:22.983 "validity": 0.0 00:26:22.983 }, 00:26:22.983 { 00:26:22.983 "id": 5, 00:26:22.983 "state": "FREE", 00:26:22.983 "validity": 0.0 00:26:22.983 }, 00:26:22.983 { 00:26:22.983 "id": 6, 00:26:22.983 "state": "FREE", 00:26:22.983 "validity": 0.0 00:26:22.983 }, 00:26:22.983 { 00:26:22.983 "id": 7, 00:26:22.983 "state": "FREE", 00:26:22.983 "validity": 0.0 00:26:22.983 }, 00:26:22.983 { 00:26:22.983 "id": 8, 00:26:22.983 "state": "FREE", 00:26:22.983 "validity": 0.0 00:26:22.983 }, 00:26:22.983 { 00:26:22.983 "id": 9, 00:26:22.983 "state": "FREE", 00:26:22.983 "validity": 0.0 00:26:22.983 }, 00:26:22.983 { 00:26:22.983 "id": 10, 00:26:22.983 "state": "FREE", 00:26:22.983 "validity": 0.0 00:26:22.983 }, 00:26:22.983 { 00:26:22.983 "id": 11, 00:26:22.983 "state": "FREE", 00:26:22.983 "validity": 0.0 00:26:22.983 }, 00:26:22.983 { 00:26:22.983 "id": 12, 00:26:22.983 "state": "FREE", 00:26:22.983 "validity": 0.0 00:26:22.983 }, 00:26:22.983 { 00:26:22.983 "id": 13, 00:26:22.983 "state": "FREE", 00:26:22.983 "validity": 0.0 00:26:22.983 }, 00:26:22.983 { 00:26:22.983 "id": 14, 00:26:22.983 "state": "FREE", 00:26:22.983 "validity": 0.0 00:26:22.983 }, 00:26:22.983 { 00:26:22.983 "id": 15, 00:26:22.983 "state": "FREE", 00:26:22.983 "validity": 0.0 00:26:22.983 }, 00:26:22.983 { 00:26:22.983 "id": 16, 00:26:22.983 "state": "FREE", 00:26:22.983 "validity": 0.0 00:26:22.983 }, 00:26:22.983 { 00:26:22.983 "id": 17, 00:26:22.983 "state": "FREE", 00:26:22.983 "validity": 0.0 00:26:22.983 } 00:26:22.983 ], 00:26:22.983 "read-only": true 00:26:22.983 }, 00:26:22.983 { 00:26:22.983 "name": "cache_device", 00:26:22.983 "type": "bdev", 00:26:22.983 "chunks": [ 00:26:22.983 { 00:26:22.984 "id": 0, 00:26:22.984 "state": "INACTIVE", 00:26:22.984 "utilization": 0.0 00:26:22.984 }, 00:26:22.984 { 00:26:22.984 "id": 1, 00:26:22.984 "state": "OPEN", 00:26:22.984 "utilization": 0.0 00:26:22.984 }, 00:26:22.984 { 00:26:22.984 "id": 2, 00:26:22.984 "state": "OPEN", 00:26:22.984 "utilization": 0.0 00:26:22.984 }, 00:26:22.984 { 00:26:22.984 "id": 3, 00:26:22.984 "state": "FREE", 00:26:22.984 "utilization": 0.0 00:26:22.984 }, 00:26:22.984 { 00:26:22.984 "id": 4, 00:26:22.984 "state": "FREE", 00:26:22.984 "utilization": 0.0 00:26:22.984 } 00:26:22.984 ], 00:26:22.984 "read-only": true 00:26:22.984 }, 00:26:22.984 { 00:26:22.984 "name": "verbose_mode", 00:26:22.984 "value": true, 00:26:22.984 "unit": "", 00:26:22.984 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:22.984 }, 00:26:22.984 { 00:26:22.984 "name": "prep_upgrade_on_shutdown", 00:26:22.984 "value": false, 00:26:22.984 "unit": "", 00:26:22.984 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:22.984 } 00:26:22.984 ] 00:26:22.984 } 00:26:22.984 13:25:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:26:22.984 13:25:19 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:22.984 13:25:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:23.242 13:25:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:26:23.242 13:25:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:26:23.242 13:25:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:26:23.242 13:25:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:23.242 13:25:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:26:23.500 Validate MD5 checksum, iteration 1 00:26:23.500 13:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:26:23.500 13:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:26:23.500 13:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:26:23.500 13:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:23.500 13:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:23.500 13:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:23.500 13:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:23.500 13:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:23.500 13:25:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:23.500 13:25:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:23.500 13:25:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:23.500 13:25:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:23.500 13:25:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:23.759 [2024-07-15 13:25:20.312597] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
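Annotation: the used=0 and opened=0 checks above both post-process the bdev_ftl_get_properties JSON with jq. Rewritten as a standalone sketch (assuming $RPC points at scripts/rpc.py), the same filters from upgrade_shutdown.sh@82 and @89 look like this:

  # Sketch of the property checks run above.
  props="$("$RPC" bdev_ftl_get_properties -b ftl)"

  # Cache chunks that still hold data.
  used=$(jq '[.properties[] | select(.name == "cache_device")
              | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")

  # Bands left in the OPENED state.
  opened=$(jq '[.properties[] | select(.name == "bands")
                | .bands[] | select(.state == "OPENED")] | length' <<< "$props")

  # After the clean shutdown and reload above, both counters are expected to be 0.
  [[ $used -eq 0 && $opened -eq 0 ]] || echo "unexpected state: used=$used opened=$opened"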
00:26:23.759 [2024-07-15 13:25:20.312800] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95104 ] 00:26:23.759 [2024-07-15 13:25:20.464770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.017 [2024-07-15 13:25:20.568856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.152  Copying: 502/1024 [MB] (502 MBps) Copying: 985/1024 [MB] (483 MBps) Copying: 1024/1024 [MB] (average 486 MBps) 00:26:27.152 00:26:27.152 13:25:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:27.152 13:25:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:29.679 13:25:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:29.679 Validate MD5 checksum, iteration 2 00:26:29.679 13:25:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=4ed88b2c3dcbbab165b9cd8aca0245e8 00:26:29.679 13:25:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 4ed88b2c3dcbbab165b9cd8aca0245e8 != \4\e\d\8\8\b\2\c\3\d\c\b\b\a\b\1\6\5\b\9\c\d\8\a\c\a\0\2\4\5\e\8 ]] 00:26:29.679 13:25:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:29.679 13:25:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:29.679 13:25:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:29.679 13:25:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:29.679 13:25:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:29.679 13:25:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:29.679 13:25:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:29.679 13:25:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:29.679 13:25:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:29.679 [2024-07-15 13:25:26.173601] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:26:29.679 [2024-07-15 13:25:26.173789] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95170 ] 00:26:29.679 [2024-07-15 13:25:26.321217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.937 [2024-07-15 13:25:26.418396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.143  Copying: 500/1024 [MB] (500 MBps) Copying: 963/1024 [MB] (463 MBps) Copying: 1024/1024 [MB] (average 476 MBps) 00:26:34.143 00:26:34.143 13:25:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:34.143 13:25:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:36.674 13:25:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:36.674 13:25:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=4dec7460871601e9a0b4fb61c50a6343 00:26:36.674 13:25:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 4dec7460871601e9a0b4fb61c50a6343 != \4\d\e\c\7\4\6\0\8\7\1\6\0\1\e\9\a\0\b\4\f\b\6\1\c\5\0\a\6\3\4\3 ]] 00:26:36.674 13:25:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:36.674 13:25:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:36.674 13:25:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:26:36.674 13:25:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 95034 ]] 00:26:36.674 13:25:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 95034 00:26:36.674 13:25:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:26:36.674 13:25:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:26:36.674 13:25:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:36.674 13:25:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:36.674 13:25:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:36.674 13:25:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=95243 00:26:36.674 13:25:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:36.674 13:25:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:36.674 13:25:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 95243 00:26:36.674 13:25:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 95243 ']' 00:26:36.674 13:25:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.674 13:25:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:36.674 13:25:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:36.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
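Both MD5 iterations above match, so the test moves on to tcp_target_shutdown_dirty: the running spdk_tgt (pid 95034) is killed with SIGKILL so FTL gets no chance to persist a clean shutdown state, and a new target (pid 95243) is immediately started from the saved tgt.json. A rough sketch of that sequence under the paths shown in the trace; the actual backgrounding and pid bookkeeping live in the ftl/common.sh and autotest_common.sh helpers:

  # Dirty shutdown: SIGKILL the target so no clean-state metadata is written
  kill -9 "$spdk_tgt_pid"
  unset spdk_tgt_pid

  # Restart the target from the config captured before the kill
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"   # waits until /var/tmp/spdk.sock is listening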
00:26:36.674 13:25:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:36.674 13:25:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:36.674 [2024-07-15 13:25:32.914736] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:26:36.674 [2024-07-15 13:25:32.914907] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95243 ] 00:26:36.674 [2024-07-15 13:25:33.059107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.674 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 826: 95034 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:26:36.674 [2024-07-15 13:25:33.157529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.933 [2024-07-15 13:25:33.506943] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:36.933 [2024-07-15 13:25:33.507033] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:36.933 [2024-07-15 13:25:33.650054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.933 [2024-07-15 13:25:33.650132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:36.933 [2024-07-15 13:25:33.650169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:36.933 [2024-07-15 13:25:33.650195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.933 [2024-07-15 13:25:33.650293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.933 [2024-07-15 13:25:33.650313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:36.933 [2024-07-15 13:25:33.650344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:26:36.933 [2024-07-15 13:25:33.650357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.933 [2024-07-15 13:25:33.650392] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:36.933 [2024-07-15 13:25:33.650749] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:36.933 [2024-07-15 13:25:33.650793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.933 [2024-07-15 13:25:33.650808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:36.933 [2024-07-15 13:25:33.650821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.408 ms 00:26:36.933 [2024-07-15 13:25:33.650833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.933 [2024-07-15 13:25:33.651376] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:36.933 [2024-07-15 13:25:33.656265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.933 [2024-07-15 13:25:33.656317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:36.933 [2024-07-15 13:25:33.656337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.890 ms 00:26:36.933 [2024-07-15 13:25:33.656367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.933 [2024-07-15 13:25:33.657634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.933 
[2024-07-15 13:25:33.657675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:36.933 [2024-07-15 13:25:33.657693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:26:36.933 [2024-07-15 13:25:33.657706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.933 [2024-07-15 13:25:33.658234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.933 [2024-07-15 13:25:33.658280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:36.933 [2024-07-15 13:25:33.658297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.421 ms 00:26:36.933 [2024-07-15 13:25:33.658309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.933 [2024-07-15 13:25:33.658369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.933 [2024-07-15 13:25:33.658387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:36.933 [2024-07-15 13:25:33.658399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:26:36.933 [2024-07-15 13:25:33.658423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.933 [2024-07-15 13:25:33.658479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.933 [2024-07-15 13:25:33.658495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:36.933 [2024-07-15 13:25:33.658508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:26:36.933 [2024-07-15 13:25:33.658520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.933 [2024-07-15 13:25:33.658560] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:36.933 [2024-07-15 13:25:33.659639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.933 [2024-07-15 13:25:33.659676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:36.933 [2024-07-15 13:25:33.659701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.089 ms 00:26:36.933 [2024-07-15 13:25:33.659728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.933 [2024-07-15 13:25:33.659771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.933 [2024-07-15 13:25:33.659795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:36.933 [2024-07-15 13:25:33.659815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:36.933 [2024-07-15 13:25:33.659827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.933 [2024-07-15 13:25:33.659906] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:36.933 [2024-07-15 13:25:33.659940] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:36.933 [2024-07-15 13:25:33.659984] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:36.933 [2024-07-15 13:25:33.660019] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:26:36.933 [2024-07-15 13:25:33.660131] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:36.933 [2024-07-15 13:25:33.660163] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:36.933 [2024-07-15 13:25:33.660180] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:26:36.933 [2024-07-15 13:25:33.660195] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:36.933 [2024-07-15 13:25:33.660209] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:36.933 [2024-07-15 13:25:33.660227] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:36.933 [2024-07-15 13:25:33.660240] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:36.933 [2024-07-15 13:25:33.660251] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:36.933 [2024-07-15 13:25:33.660265] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:36.933 [2024-07-15 13:25:33.660278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.933 [2024-07-15 13:25:33.660289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:36.933 [2024-07-15 13:25:33.660301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.375 ms 00:26:36.933 [2024-07-15 13:25:33.660312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.933 [2024-07-15 13:25:33.660413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.933 [2024-07-15 13:25:33.660428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:36.933 [2024-07-15 13:25:33.660441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:26:36.933 [2024-07-15 13:25:33.660452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.933 [2024-07-15 13:25:33.660574] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:36.933 [2024-07-15 13:25:33.660603] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:36.933 [2024-07-15 13:25:33.660629] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:36.933 [2024-07-15 13:25:33.660642] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:36.933 [2024-07-15 13:25:33.660659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:36.933 [2024-07-15 13:25:33.660670] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:36.933 [2024-07-15 13:25:33.660681] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:36.933 [2024-07-15 13:25:33.660692] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:36.933 [2024-07-15 13:25:33.660702] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:36.933 [2024-07-15 13:25:33.660713] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:36.933 [2024-07-15 13:25:33.660723] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:36.933 [2024-07-15 13:25:33.660733] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:36.933 [2024-07-15 13:25:33.660744] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:36.933 [2024-07-15 13:25:33.660754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:36.933 [2024-07-15 13:25:33.660765] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:36.933 [2024-07-15 13:25:33.660775] 
ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:36.934 [2024-07-15 13:25:33.660786] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:36.934 [2024-07-15 13:25:33.660796] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:36.934 [2024-07-15 13:25:33.660807] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:36.934 [2024-07-15 13:25:33.660820] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:36.934 [2024-07-15 13:25:33.660834] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:36.934 [2024-07-15 13:25:33.660846] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:36.934 [2024-07-15 13:25:33.660857] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:36.934 [2024-07-15 13:25:33.660868] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:36.934 [2024-07-15 13:25:33.660879] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:36.934 [2024-07-15 13:25:33.660890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:36.934 [2024-07-15 13:25:33.660901] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:36.934 [2024-07-15 13:25:33.660911] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:36.934 [2024-07-15 13:25:33.660922] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:36.934 [2024-07-15 13:25:33.660932] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:36.934 [2024-07-15 13:25:33.660943] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:36.934 [2024-07-15 13:25:33.660953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:36.934 [2024-07-15 13:25:33.660964] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:36.934 [2024-07-15 13:25:33.660974] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:36.934 [2024-07-15 13:25:33.660985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:36.934 [2024-07-15 13:25:33.660995] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:36.934 [2024-07-15 13:25:33.661009] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:36.934 [2024-07-15 13:25:33.661021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:36.934 [2024-07-15 13:25:33.661032] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:36.934 [2024-07-15 13:25:33.661042] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:36.934 [2024-07-15 13:25:33.661053] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:36.934 [2024-07-15 13:25:33.661064] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:36.934 [2024-07-15 13:25:33.661075] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:36.934 [2024-07-15 13:25:33.661085] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:36.934 [2024-07-15 13:25:33.661097] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:36.934 [2024-07-15 13:25:33.661108] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:36.934 [2024-07-15 13:25:33.661119] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:36.934 [2024-07-15 
13:25:33.661131] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:36.934 [2024-07-15 13:25:33.661156] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:36.934 [2024-07-15 13:25:33.661171] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:36.934 [2024-07-15 13:25:33.661183] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:36.934 [2024-07-15 13:25:33.661194] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:36.934 [2024-07-15 13:25:33.661212] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:36.934 [2024-07-15 13:25:33.661226] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:36.934 [2024-07-15 13:25:33.661240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:36.934 [2024-07-15 13:25:33.661253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:36.934 [2024-07-15 13:25:33.661265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:36.934 [2024-07-15 13:25:33.661277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:36.934 [2024-07-15 13:25:33.661288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:36.934 [2024-07-15 13:25:33.661299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:36.934 [2024-07-15 13:25:33.661311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:36.934 [2024-07-15 13:25:33.661322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:36.934 [2024-07-15 13:25:33.661334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:36.934 [2024-07-15 13:25:33.661345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:36.934 [2024-07-15 13:25:33.661357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:36.934 [2024-07-15 13:25:33.661369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:36.934 [2024-07-15 13:25:33.661380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:36.934 [2024-07-15 13:25:33.661392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:36.934 [2024-07-15 13:25:33.661407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:36.934 [2024-07-15 13:25:33.661419] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:36.934 
[2024-07-15 13:25:33.661436] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:36.934 [2024-07-15 13:25:33.661448] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:36.934 [2024-07-15 13:25:33.661460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:36.934 [2024-07-15 13:25:33.661471] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:36.934 [2024-07-15 13:25:33.661483] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:36.934 [2024-07-15 13:25:33.661496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.934 [2024-07-15 13:25:33.661507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:36.934 [2024-07-15 13:25:33.661530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.986 ms 00:26:36.934 [2024-07-15 13:25:33.661542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.193 [2024-07-15 13:25:33.674954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.193 [2024-07-15 13:25:33.675017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:37.193 [2024-07-15 13:25:33.675040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.330 ms 00:26:37.193 [2024-07-15 13:25:33.675053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.193 [2024-07-15 13:25:33.675135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.193 [2024-07-15 13:25:33.675182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:37.193 [2024-07-15 13:25:33.675199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:26:37.193 [2024-07-15 13:25:33.675211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.193 [2024-07-15 13:25:33.689129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.193 [2024-07-15 13:25:33.689201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:37.193 [2024-07-15 13:25:33.689229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.801 ms 00:26:37.193 [2024-07-15 13:25:33.689243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.193 [2024-07-15 13:25:33.689330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.193 [2024-07-15 13:25:33.689357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:37.193 [2024-07-15 13:25:33.689372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:37.193 [2024-07-15 13:25:33.689384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.193 [2024-07-15 13:25:33.689539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.193 [2024-07-15 13:25:33.689558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:37.193 [2024-07-15 13:25:33.689571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:26:37.193 [2024-07-15 13:25:33.689585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:26:37.193 [2024-07-15 13:25:33.689652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.193 [2024-07-15 13:25:33.689668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:37.193 [2024-07-15 13:25:33.689681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:26:37.193 [2024-07-15 13:25:33.689693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.193 [2024-07-15 13:25:33.699373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.193 [2024-07-15 13:25:33.699435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:37.193 [2024-07-15 13:25:33.699455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.639 ms 00:26:37.193 [2024-07-15 13:25:33.699468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.193 [2024-07-15 13:25:33.699703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.193 [2024-07-15 13:25:33.699728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:26:37.193 [2024-07-15 13:25:33.699751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:26:37.193 [2024-07-15 13:25:33.699763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.193 [2024-07-15 13:25:33.714576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.193 [2024-07-15 13:25:33.714647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:26:37.193 [2024-07-15 13:25:33.714671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.776 ms 00:26:37.193 [2024-07-15 13:25:33.714691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.193 [2024-07-15 13:25:33.716842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.193 [2024-07-15 13:25:33.716882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:37.193 [2024-07-15 13:25:33.716900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.392 ms 00:26:37.193 [2024-07-15 13:25:33.716918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.193 [2024-07-15 13:25:33.739926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.193 [2024-07-15 13:25:33.740001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:37.193 [2024-07-15 13:25:33.740025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.958 ms 00:26:37.193 [2024-07-15 13:25:33.740039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.193 [2024-07-15 13:25:33.740326] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:26:37.193 [2024-07-15 13:25:33.740477] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:26:37.193 [2024-07-15 13:25:33.740608] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:26:37.193 [2024-07-15 13:25:33.740741] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:26:37.193 [2024-07-15 13:25:33.740760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.193 [2024-07-15 13:25:33.740773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:26:37.193 [2024-07-15 13:25:33.740798] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.576 ms 00:26:37.193 [2024-07-15 13:25:33.740810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.193 [2024-07-15 13:25:33.740915] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:26:37.193 [2024-07-15 13:25:33.740950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.193 [2024-07-15 13:25:33.740963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:26:37.193 [2024-07-15 13:25:33.740976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:26:37.193 [2024-07-15 13:25:33.740989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.193 [2024-07-15 13:25:33.744677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.193 [2024-07-15 13:25:33.744724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:26:37.193 [2024-07-15 13:25:33.744746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.653 ms 00:26:37.193 [2024-07-15 13:25:33.744761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.193 [2024-07-15 13:25:33.745622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.193 [2024-07-15 13:25:33.745658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:26:37.193 [2024-07-15 13:25:33.745675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:26:37.193 [2024-07-15 13:25:33.745687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.193 [2024-07-15 13:25:33.746000] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:26:37.760 [2024-07-15 13:25:34.263659] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:26:37.760 [2024-07-15 13:25:34.263859] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:26:38.326 [2024-07-15 13:25:34.780274] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:26:38.326 [2024-07-15 13:25:34.780418] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:38.326 [2024-07-15 13:25:34.780443] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:38.326 [2024-07-15 13:25:34.780460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.326 [2024-07-15 13:25:34.780475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:26:38.326 [2024-07-15 13:25:34.780493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1034.702 ms 00:26:38.326 [2024-07-15 13:25:34.780527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.326 [2024-07-15 13:25:34.780587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.326 [2024-07-15 13:25:34.780602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:26:38.326 [2024-07-15 13:25:34.780624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:38.326 [2024-07-15 13:25:34.780636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.326 
[2024-07-15 13:25:34.790574] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:38.326 [2024-07-15 13:25:34.790780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.326 [2024-07-15 13:25:34.790801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:38.326 [2024-07-15 13:25:34.790834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.118 ms 00:26:38.326 [2024-07-15 13:25:34.790847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.326 [2024-07-15 13:25:34.791665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.326 [2024-07-15 13:25:34.791693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:26:38.326 [2024-07-15 13:25:34.791709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.668 ms 00:26:38.326 [2024-07-15 13:25:34.791733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.326 [2024-07-15 13:25:34.794175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.326 [2024-07-15 13:25:34.794202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:26:38.326 [2024-07-15 13:25:34.794217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.404 ms 00:26:38.326 [2024-07-15 13:25:34.794230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.326 [2024-07-15 13:25:34.794299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.326 [2024-07-15 13:25:34.794319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:26:38.326 [2024-07-15 13:25:34.794332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:38.326 [2024-07-15 13:25:34.794344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.326 [2024-07-15 13:25:34.794507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.326 [2024-07-15 13:25:34.794527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:38.326 [2024-07-15 13:25:34.794540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:26:38.326 [2024-07-15 13:25:34.794561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.326 [2024-07-15 13:25:34.794610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.326 [2024-07-15 13:25:34.794631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:38.326 [2024-07-15 13:25:34.794644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:38.326 [2024-07-15 13:25:34.794657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.326 [2024-07-15 13:25:34.794711] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:38.326 [2024-07-15 13:25:34.794728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.326 [2024-07-15 13:25:34.794740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:38.326 [2024-07-15 13:25:34.794752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:26:38.326 [2024-07-15 13:25:34.794764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.326 [2024-07-15 13:25:34.794835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.326 [2024-07-15 13:25:34.794852] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:38.326 [2024-07-15 13:25:34.794874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:26:38.326 [2024-07-15 13:25:34.794886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.326 [2024-07-15 13:25:34.796334] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1145.740 ms, result 0 00:26:38.326 [2024-07-15 13:25:34.811580] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:38.327 [2024-07-15 13:25:34.827636] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:38.327 [2024-07-15 13:25:34.835744] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:38.327 Validate MD5 checksum, iteration 1 00:26:38.327 13:25:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:38.327 13:25:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:26:38.327 13:25:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:38.327 13:25:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:38.327 13:25:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:26:38.327 13:25:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:38.327 13:25:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:38.327 13:25:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:38.327 13:25:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:38.327 13:25:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:38.327 13:25:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:38.327 13:25:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:38.327 13:25:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:38.327 13:25:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:38.327 13:25:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:38.327 [2024-07-15 13:25:34.966012] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
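With recovery complete (band state, P2L checkpoints, and the two open NV cache chunks restored) and the target listening on 127.0.0.1:4420, the test repeats the checksum validation through the NVMe/TCP initiator path. One iteration boils down to the spdk_dd read plus an md5sum comparison, exactly as traced above; a minimal sketch, assuming $skip starts at 0 and $expected holds the sum recorded before the kill:

  # Read 1024 x 1 MiB blocks from ftln1 into the scratch file via the initiator config
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
      --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
      --bs=1048576 --count=1024 --qd=2 --skip=$skip
  sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
  [[ $sum == "$expected" ]] || exit 1   # e.g. 4ed88b2c3dcbbab165b9cd8aca0245e8 for iteration 1
  skip=$((skip + 1024))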
00:26:38.327 [2024-07-15 13:25:34.966266] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95275 ] 00:26:38.584 [2024-07-15 13:25:35.117806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.585 [2024-07-15 13:25:35.223224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.700  Copying: 482/1024 [MB] (482 MBps) Copying: 953/1024 [MB] (471 MBps) Copying: 1024/1024 [MB] (average 478 MBps) 00:26:43.700 00:26:43.700 13:25:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:43.700 13:25:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:45.598 13:25:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:45.598 Validate MD5 checksum, iteration 2 00:26:45.598 13:25:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=4ed88b2c3dcbbab165b9cd8aca0245e8 00:26:45.598 13:25:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 4ed88b2c3dcbbab165b9cd8aca0245e8 != \4\e\d\8\8\b\2\c\3\d\c\b\b\a\b\1\6\5\b\9\c\d\8\a\c\a\0\2\4\5\e\8 ]] 00:26:45.598 13:25:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:45.598 13:25:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:45.598 13:25:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:45.598 13:25:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:45.598 13:25:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:45.598 13:25:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:45.598 13:25:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:45.598 13:25:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:45.598 13:25:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:45.855 [2024-07-15 13:25:42.398688] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:26:45.855 [2024-07-15 13:25:42.398853] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95353 ] 00:26:45.855 [2024-07-15 13:25:42.541895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.113 [2024-07-15 13:25:42.639803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.625  Copying: 451/1024 [MB] (451 MBps) Copying: 895/1024 [MB] (444 MBps) Copying: 1024/1024 [MB] (average 445 MBps) 00:26:49.625 00:26:49.625 13:25:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:49.625 13:25:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:52.154 13:25:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:52.154 13:25:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=4dec7460871601e9a0b4fb61c50a6343 00:26:52.154 13:25:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 4dec7460871601e9a0b4fb61c50a6343 != \4\d\e\c\7\4\6\0\8\7\1\6\0\1\e\9\a\0\b\4\f\b\6\1\c\5\0\a\6\3\4\3 ]] 00:26:52.154 13:25:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:52.154 13:25:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:52.154 13:25:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:26:52.154 13:25:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:26:52.154 13:25:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:26:52.154 13:25:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:52.154 13:25:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:26:52.154 13:25:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:26:52.154 13:25:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:26:52.154 13:25:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:26:52.154 13:25:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 95243 ]] 00:26:52.154 13:25:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 95243 00:26:52.154 13:25:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@946 -- # '[' -z 95243 ']' 00:26:52.154 13:25:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # kill -0 95243 00:26:52.154 13:25:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # uname 00:26:52.154 13:25:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:52.154 13:25:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95243 00:26:52.154 killing process with pid 95243 00:26:52.154 13:25:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:52.154 13:25:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:52.154 13:25:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95243' 00:26:52.154 13:25:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@965 -- # kill 95243 00:26:52.154 13:25:48 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@970 -- # wait 95243 00:26:52.413 [2024-07-15 13:25:48.894112] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:26:52.413 [2024-07-15 13:25:48.899688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.413 [2024-07-15 13:25:48.899759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:52.413 [2024-07-15 13:25:48.899782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:52.413 [2024-07-15 13:25:48.899796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.413 [2024-07-15 13:25:48.899831] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:52.413 [2024-07-15 13:25:48.900669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.413 [2024-07-15 13:25:48.900701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:52.413 [2024-07-15 13:25:48.900717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.816 ms 00:26:52.413 [2024-07-15 13:25:48.900730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.413 [2024-07-15 13:25:48.901010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.413 [2024-07-15 13:25:48.901036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:52.413 [2024-07-15 13:25:48.901050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.251 ms 00:26:52.413 [2024-07-15 13:25:48.901067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.413 [2024-07-15 13:25:48.902290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.413 [2024-07-15 13:25:48.902329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:52.413 [2024-07-15 13:25:48.902346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.196 ms 00:26:52.413 [2024-07-15 13:25:48.902358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.413 [2024-07-15 13:25:48.903570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.413 [2024-07-15 13:25:48.903598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:26:52.413 [2024-07-15 13:25:48.903613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.166 ms 00:26:52.413 [2024-07-15 13:25:48.903625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.413 [2024-07-15 13:25:48.905041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.413 [2024-07-15 13:25:48.905084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:52.413 [2024-07-15 13:25:48.905101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.360 ms 00:26:52.413 [2024-07-15 13:25:48.905113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.413 [2024-07-15 13:25:48.906482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.413 [2024-07-15 13:25:48.906518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:52.413 [2024-07-15 13:25:48.906535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.308 ms 00:26:52.413 [2024-07-15 13:25:48.906555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.413 [2024-07-15 13:25:48.906655] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.413 [2024-07-15 13:25:48.906674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:52.413 [2024-07-15 13:25:48.906687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:26:52.413 [2024-07-15 13:25:48.906712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.413 [2024-07-15 13:25:48.908027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.413 [2024-07-15 13:25:48.908065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:26:52.413 [2024-07-15 13:25:48.908080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.289 ms 00:26:52.413 [2024-07-15 13:25:48.908091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.413 [2024-07-15 13:25:48.909270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.413 [2024-07-15 13:25:48.909313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:26:52.413 [2024-07-15 13:25:48.909328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.139 ms 00:26:52.413 [2024-07-15 13:25:48.909339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.413 [2024-07-15 13:25:48.910563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.413 [2024-07-15 13:25:48.910603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:52.413 [2024-07-15 13:25:48.910619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.163 ms 00:26:52.413 [2024-07-15 13:25:48.910631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.413 [2024-07-15 13:25:48.911756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.413 [2024-07-15 13:25:48.911793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:52.413 [2024-07-15 13:25:48.911809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.041 ms 00:26:52.413 [2024-07-15 13:25:48.911820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.413 [2024-07-15 13:25:48.911861] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:52.413 [2024-07-15 13:25:48.911885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:52.413 [2024-07-15 13:25:48.911900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:52.413 [2024-07-15 13:25:48.911913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:52.413 [2024-07-15 13:25:48.911926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:52.413 [2024-07-15 13:25:48.911938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:52.413 [2024-07-15 13:25:48.911950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:52.413 [2024-07-15 13:25:48.911962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:52.413 [2024-07-15 13:25:48.911974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:52.413 [2024-07-15 13:25:48.911986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 
261120 wr_cnt: 0 state: free 00:26:52.413 [2024-07-15 13:25:48.911998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:52.413 [2024-07-15 13:25:48.912010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:52.413 [2024-07-15 13:25:48.912022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:52.413 [2024-07-15 13:25:48.912034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:52.413 [2024-07-15 13:25:48.912046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:52.413 [2024-07-15 13:25:48.912058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:52.413 [2024-07-15 13:25:48.912069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:52.413 [2024-07-15 13:25:48.912081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:52.413 [2024-07-15 13:25:48.912093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:52.413 [2024-07-15 13:25:48.912107] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:52.413 [2024-07-15 13:25:48.912119] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 64caf7b5-2004-41b7-a408-7289ffbc411e 00:26:52.413 [2024-07-15 13:25:48.912139] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:52.413 [2024-07-15 13:25:48.912170] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:26:52.413 [2024-07-15 13:25:48.912182] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:26:52.413 [2024-07-15 13:25:48.912194] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:26:52.413 [2024-07-15 13:25:48.912205] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:52.413 [2024-07-15 13:25:48.912217] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:52.413 [2024-07-15 13:25:48.912228] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:52.413 [2024-07-15 13:25:48.912238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:52.413 [2024-07-15 13:25:48.912249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:52.413 [2024-07-15 13:25:48.912260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.413 [2024-07-15 13:25:48.912273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:52.413 [2024-07-15 13:25:48.912286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.401 ms 00:26:52.413 [2024-07-15 13:25:48.912297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.413 [2024-07-15 13:25:48.914456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.413 [2024-07-15 13:25:48.914500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:52.413 [2024-07-15 13:25:48.914517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.132 ms 00:26:52.413 [2024-07-15 13:25:48.914529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.413 [2024-07-15 13:25:48.914674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:26:52.413 [2024-07-15 13:25:48.914690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:52.413 [2024-07-15 13:25:48.914703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.112 ms 00:26:52.413 [2024-07-15 13:25:48.914714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.413 [2024-07-15 13:25:48.923208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:52.413 [2024-07-15 13:25:48.923279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:52.413 [2024-07-15 13:25:48.923301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:52.413 [2024-07-15 13:25:48.923313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.413 [2024-07-15 13:25:48.923380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:52.413 [2024-07-15 13:25:48.923396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:52.413 [2024-07-15 13:25:48.923409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:52.413 [2024-07-15 13:25:48.923420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.413 [2024-07-15 13:25:48.923531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:52.413 [2024-07-15 13:25:48.923552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:52.413 [2024-07-15 13:25:48.923573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:52.413 [2024-07-15 13:25:48.923593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.413 [2024-07-15 13:25:48.923621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:52.413 [2024-07-15 13:25:48.923635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:52.413 [2024-07-15 13:25:48.923647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:52.413 [2024-07-15 13:25:48.923659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.413 [2024-07-15 13:25:48.940657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:52.413 [2024-07-15 13:25:48.940725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:52.413 [2024-07-15 13:25:48.940746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:52.413 [2024-07-15 13:25:48.940759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.413 [2024-07-15 13:25:48.951124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:52.413 [2024-07-15 13:25:48.951231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:52.413 [2024-07-15 13:25:48.951251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:52.413 [2024-07-15 13:25:48.951264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.413 [2024-07-15 13:25:48.951379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:52.413 [2024-07-15 13:25:48.951405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:52.413 [2024-07-15 13:25:48.951418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:52.413 [2024-07-15 13:25:48.951430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.413 [2024-07-15 13:25:48.951494] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:52.413 [2024-07-15 13:25:48.951511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:52.413 [2024-07-15 13:25:48.951523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:52.413 [2024-07-15 13:25:48.951535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.413 [2024-07-15 13:25:48.951635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:52.413 [2024-07-15 13:25:48.951653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:52.413 [2024-07-15 13:25:48.951672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:52.413 [2024-07-15 13:25:48.951684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.413 [2024-07-15 13:25:48.951741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:52.413 [2024-07-15 13:25:48.951758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:52.413 [2024-07-15 13:25:48.951772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:52.413 [2024-07-15 13:25:48.951783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.413 [2024-07-15 13:25:48.951834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:52.413 [2024-07-15 13:25:48.951856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:52.413 [2024-07-15 13:25:48.951876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:52.413 [2024-07-15 13:25:48.951888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.413 [2024-07-15 13:25:48.951958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:52.413 [2024-07-15 13:25:48.951974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:52.413 [2024-07-15 13:25:48.951987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:52.413 [2024-07-15 13:25:48.951998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.413 [2024-07-15 13:25:48.952186] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 52.438 ms, result 0 00:26:52.672 13:25:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:52.672 13:25:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:52.672 13:25:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:26:52.672 13:25:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:26:52.672 13:25:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:26:52.672 13:25:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:52.672 Remove shared memory files 00:26:52.672 13:25:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:26:52.672 13:25:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:52.672 13:25:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:52.672 13:25:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:52.672 13:25:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid95034 00:26:52.672 13:25:49 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:52.672 13:25:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:52.672 ************************************ 00:26:52.672 END TEST ftl_upgrade_shutdown 00:26:52.672 ************************************ 00:26:52.672 00:26:52.672 real 1m19.869s 00:26:52.672 user 1m49.647s 00:26:52.672 sys 0m23.333s 00:26:52.672 13:25:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:52.672 13:25:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:52.672 13:25:49 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:26:52.672 13:25:49 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:26:52.672 13:25:49 ftl -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:26:52.672 13:25:49 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:52.672 13:25:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:52.672 ************************************ 00:26:52.672 START TEST ftl_restore_fast 00:26:52.672 ************************************ 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:26:52.672 * Looking for test storage... 00:26:52.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.8igas9DHPU 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:26:52.672 13:25:49 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=95491 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 95491 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- common/autotest_common.sh@827 -- # '[' -z 95491 ']' 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:52.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:52.672 13:25:49 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:26:52.930 [2024-07-15 13:25:49.517125] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:26:52.930 [2024-07-15 13:25:49.517333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95491 ] 00:26:52.930 [2024-07-15 13:25:49.664822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.187 [2024-07-15 13:25:49.763246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.753 13:25:50 ftl.ftl_restore_fast -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:53.753 13:25:50 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # return 0 00:26:53.753 13:25:50 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:53.753 13:25:50 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:26:53.753 13:25:50 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:53.753 13:25:50 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:26:53.753 13:25:50 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:26:53.753 13:25:50 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:54.032 13:25:50 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:54.032 13:25:50 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:26:54.032 13:25:50 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:54.032 13:25:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:26:54.032 13:25:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:26:54.032 13:25:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:26:54.032 13:25:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local 
nb 00:26:54.032 13:25:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:54.605 13:25:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:26:54.605 { 00:26:54.605 "name": "nvme0n1", 00:26:54.605 "aliases": [ 00:26:54.605 "146aa11e-2426-4c0a-bdac-9ed952c6036a" 00:26:54.605 ], 00:26:54.605 "product_name": "NVMe disk", 00:26:54.605 "block_size": 4096, 00:26:54.605 "num_blocks": 1310720, 00:26:54.605 "uuid": "146aa11e-2426-4c0a-bdac-9ed952c6036a", 00:26:54.605 "assigned_rate_limits": { 00:26:54.605 "rw_ios_per_sec": 0, 00:26:54.605 "rw_mbytes_per_sec": 0, 00:26:54.605 "r_mbytes_per_sec": 0, 00:26:54.605 "w_mbytes_per_sec": 0 00:26:54.605 }, 00:26:54.605 "claimed": true, 00:26:54.605 "claim_type": "read_many_write_one", 00:26:54.605 "zoned": false, 00:26:54.605 "supported_io_types": { 00:26:54.605 "read": true, 00:26:54.605 "write": true, 00:26:54.605 "unmap": true, 00:26:54.605 "write_zeroes": true, 00:26:54.605 "flush": true, 00:26:54.605 "reset": true, 00:26:54.605 "compare": true, 00:26:54.605 "compare_and_write": false, 00:26:54.605 "abort": true, 00:26:54.605 "nvme_admin": true, 00:26:54.605 "nvme_io": true 00:26:54.605 }, 00:26:54.605 "driver_specific": { 00:26:54.605 "nvme": [ 00:26:54.605 { 00:26:54.605 "pci_address": "0000:00:11.0", 00:26:54.605 "trid": { 00:26:54.605 "trtype": "PCIe", 00:26:54.605 "traddr": "0000:00:11.0" 00:26:54.605 }, 00:26:54.605 "ctrlr_data": { 00:26:54.605 "cntlid": 0, 00:26:54.605 "vendor_id": "0x1b36", 00:26:54.605 "model_number": "QEMU NVMe Ctrl", 00:26:54.605 "serial_number": "12341", 00:26:54.605 "firmware_revision": "8.0.0", 00:26:54.605 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:54.605 "oacs": { 00:26:54.605 "security": 0, 00:26:54.605 "format": 1, 00:26:54.605 "firmware": 0, 00:26:54.605 "ns_manage": 1 00:26:54.605 }, 00:26:54.605 "multi_ctrlr": false, 00:26:54.605 "ana_reporting": false 00:26:54.605 }, 00:26:54.605 "vs": { 00:26:54.605 "nvme_version": "1.4" 00:26:54.605 }, 00:26:54.605 "ns_data": { 00:26:54.605 "id": 1, 00:26:54.605 "can_share": false 00:26:54.605 } 00:26:54.605 } 00:26:54.605 ], 00:26:54.605 "mp_policy": "active_passive" 00:26:54.605 } 00:26:54.605 } 00:26:54.605 ]' 00:26:54.605 13:25:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:26:54.605 13:25:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:26:54.605 13:25:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:26:54.605 13:25:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=1310720 00:26:54.605 13:25:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:26:54.605 13:25:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 5120 00:26:54.605 13:25:51 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:26:54.605 13:25:51 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:54.605 13:25:51 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:26:54.605 13:25:51 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:54.605 13:25:51 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:54.864 13:25:51 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=ba446a15-97a6-4ff0-aa53-722c526778c3 00:26:54.864 13:25:51 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:26:54.864 13:25:51 
ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ba446a15-97a6-4ff0-aa53-722c526778c3 00:26:55.122 13:25:51 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:55.381 13:25:51 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=719d2675-758c-4d74-9248-228f377ae501 00:26:55.381 13:25:51 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 719d2675-758c-4d74-9248-228f377ae501 00:26:55.639 13:25:52 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=277bda38-e7b4-4c7d-a5b2-9e43c41671c8 00:26:55.639 13:25:52 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:26:55.639 13:25:52 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 277bda38-e7b4-4c7d-a5b2-9e43c41671c8 00:26:55.639 13:25:52 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:26:55.639 13:25:52 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:55.639 13:25:52 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=277bda38-e7b4-4c7d-a5b2-9e43c41671c8 00:26:55.639 13:25:52 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:26:55.639 13:25:52 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 277bda38-e7b4-4c7d-a5b2-9e43c41671c8 00:26:55.639 13:25:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=277bda38-e7b4-4c7d-a5b2-9e43c41671c8 00:26:55.639 13:25:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:26:55.639 13:25:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:26:55.639 13:25:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local nb 00:26:55.639 13:25:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 277bda38-e7b4-4c7d-a5b2-9e43c41671c8 00:26:55.897 13:25:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:26:55.897 { 00:26:55.897 "name": "277bda38-e7b4-4c7d-a5b2-9e43c41671c8", 00:26:55.897 "aliases": [ 00:26:55.897 "lvs/nvme0n1p0" 00:26:55.897 ], 00:26:55.897 "product_name": "Logical Volume", 00:26:55.897 "block_size": 4096, 00:26:55.897 "num_blocks": 26476544, 00:26:55.897 "uuid": "277bda38-e7b4-4c7d-a5b2-9e43c41671c8", 00:26:55.897 "assigned_rate_limits": { 00:26:55.897 "rw_ios_per_sec": 0, 00:26:55.897 "rw_mbytes_per_sec": 0, 00:26:55.897 "r_mbytes_per_sec": 0, 00:26:55.897 "w_mbytes_per_sec": 0 00:26:55.897 }, 00:26:55.897 "claimed": false, 00:26:55.897 "zoned": false, 00:26:55.897 "supported_io_types": { 00:26:55.897 "read": true, 00:26:55.897 "write": true, 00:26:55.897 "unmap": true, 00:26:55.897 "write_zeroes": true, 00:26:55.897 "flush": false, 00:26:55.897 "reset": true, 00:26:55.897 "compare": false, 00:26:55.897 "compare_and_write": false, 00:26:55.897 "abort": false, 00:26:55.897 "nvme_admin": false, 00:26:55.897 "nvme_io": false 00:26:55.897 }, 00:26:55.897 "driver_specific": { 00:26:55.897 "lvol": { 00:26:55.897 "lvol_store_uuid": "719d2675-758c-4d74-9248-228f377ae501", 00:26:55.897 "base_bdev": "nvme0n1", 00:26:55.897 "thin_provision": true, 00:26:55.897 "num_allocated_clusters": 0, 00:26:55.897 "snapshot": false, 00:26:55.897 "clone": false, 00:26:55.897 "esnap_clone": false 00:26:55.897 } 00:26:55.897 } 00:26:55.897 } 00:26:55.897 ]' 00:26:55.897 13:25:52 
ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:26:55.897 13:25:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:26:55.897 13:25:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:26:55.897 13:25:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=26476544 00:26:55.897 13:25:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:26:55.897 13:25:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 103424 00:26:55.897 13:25:52 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:26:55.897 13:25:52 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:26:55.897 13:25:52 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:56.155 13:25:52 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:56.155 13:25:52 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:56.155 13:25:52 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 277bda38-e7b4-4c7d-a5b2-9e43c41671c8 00:26:56.155 13:25:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=277bda38-e7b4-4c7d-a5b2-9e43c41671c8 00:26:56.155 13:25:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:26:56.155 13:25:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:26:56.155 13:25:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local nb 00:26:56.155 13:25:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 277bda38-e7b4-4c7d-a5b2-9e43c41671c8 00:26:56.412 13:25:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:26:56.412 { 00:26:56.412 "name": "277bda38-e7b4-4c7d-a5b2-9e43c41671c8", 00:26:56.413 "aliases": [ 00:26:56.413 "lvs/nvme0n1p0" 00:26:56.413 ], 00:26:56.413 "product_name": "Logical Volume", 00:26:56.413 "block_size": 4096, 00:26:56.413 "num_blocks": 26476544, 00:26:56.413 "uuid": "277bda38-e7b4-4c7d-a5b2-9e43c41671c8", 00:26:56.413 "assigned_rate_limits": { 00:26:56.413 "rw_ios_per_sec": 0, 00:26:56.413 "rw_mbytes_per_sec": 0, 00:26:56.413 "r_mbytes_per_sec": 0, 00:26:56.413 "w_mbytes_per_sec": 0 00:26:56.413 }, 00:26:56.413 "claimed": false, 00:26:56.413 "zoned": false, 00:26:56.413 "supported_io_types": { 00:26:56.413 "read": true, 00:26:56.413 "write": true, 00:26:56.413 "unmap": true, 00:26:56.413 "write_zeroes": true, 00:26:56.413 "flush": false, 00:26:56.413 "reset": true, 00:26:56.413 "compare": false, 00:26:56.413 "compare_and_write": false, 00:26:56.413 "abort": false, 00:26:56.413 "nvme_admin": false, 00:26:56.413 "nvme_io": false 00:26:56.413 }, 00:26:56.413 "driver_specific": { 00:26:56.413 "lvol": { 00:26:56.413 "lvol_store_uuid": "719d2675-758c-4d74-9248-228f377ae501", 00:26:56.413 "base_bdev": "nvme0n1", 00:26:56.413 "thin_provision": true, 00:26:56.413 "num_allocated_clusters": 0, 00:26:56.413 "snapshot": false, 00:26:56.413 "clone": false, 00:26:56.413 "esnap_clone": false 00:26:56.413 } 00:26:56.413 } 00:26:56.413 } 00:26:56.413 ]' 00:26:56.413 13:25:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:26:56.670 13:25:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:26:56.670 13:25:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 
00:26:56.670 13:25:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=26476544 00:26:56.670 13:25:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:26:56.670 13:25:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 103424 00:26:56.670 13:25:53 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:26:56.670 13:25:53 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:56.927 13:25:53 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:26:56.927 13:25:53 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 277bda38-e7b4-4c7d-a5b2-9e43c41671c8 00:26:56.927 13:25:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=277bda38-e7b4-4c7d-a5b2-9e43c41671c8 00:26:56.927 13:25:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:26:56.927 13:25:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:26:56.927 13:25:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local nb 00:26:56.927 13:25:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 277bda38-e7b4-4c7d-a5b2-9e43c41671c8 00:26:57.186 13:25:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:26:57.186 { 00:26:57.186 "name": "277bda38-e7b4-4c7d-a5b2-9e43c41671c8", 00:26:57.186 "aliases": [ 00:26:57.186 "lvs/nvme0n1p0" 00:26:57.186 ], 00:26:57.186 "product_name": "Logical Volume", 00:26:57.186 "block_size": 4096, 00:26:57.186 "num_blocks": 26476544, 00:26:57.186 "uuid": "277bda38-e7b4-4c7d-a5b2-9e43c41671c8", 00:26:57.186 "assigned_rate_limits": { 00:26:57.186 "rw_ios_per_sec": 0, 00:26:57.186 "rw_mbytes_per_sec": 0, 00:26:57.186 "r_mbytes_per_sec": 0, 00:26:57.186 "w_mbytes_per_sec": 0 00:26:57.186 }, 00:26:57.186 "claimed": false, 00:26:57.186 "zoned": false, 00:26:57.186 "supported_io_types": { 00:26:57.186 "read": true, 00:26:57.186 "write": true, 00:26:57.186 "unmap": true, 00:26:57.186 "write_zeroes": true, 00:26:57.186 "flush": false, 00:26:57.186 "reset": true, 00:26:57.186 "compare": false, 00:26:57.186 "compare_and_write": false, 00:26:57.186 "abort": false, 00:26:57.186 "nvme_admin": false, 00:26:57.186 "nvme_io": false 00:26:57.186 }, 00:26:57.186 "driver_specific": { 00:26:57.186 "lvol": { 00:26:57.186 "lvol_store_uuid": "719d2675-758c-4d74-9248-228f377ae501", 00:26:57.186 "base_bdev": "nvme0n1", 00:26:57.186 "thin_provision": true, 00:26:57.186 "num_allocated_clusters": 0, 00:26:57.186 "snapshot": false, 00:26:57.186 "clone": false, 00:26:57.186 "esnap_clone": false 00:26:57.186 } 00:26:57.186 } 00:26:57.186 } 00:26:57.186 ]' 00:26:57.186 13:25:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:26:57.186 13:25:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:26:57.186 13:25:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:26:57.186 13:25:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=26476544 00:26:57.186 13:25:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:26:57.186 13:25:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 103424 00:26:57.186 13:25:53 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:26:57.186 13:25:53 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # 
ftl_construct_args='bdev_ftl_create -b ftl0 -d 277bda38-e7b4-4c7d-a5b2-9e43c41671c8 --l2p_dram_limit 10' 00:26:57.186 13:25:53 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:26:57.186 13:25:53 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:26:57.186 13:25:53 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:57.186 13:25:53 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:26:57.186 13:25:53 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:26:57.186 13:25:53 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 277bda38-e7b4-4c7d-a5b2-9e43c41671c8 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:26:57.445 [2024-07-15 13:25:54.099056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.445 [2024-07-15 13:25:54.099131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:57.445 [2024-07-15 13:25:54.099172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:57.445 [2024-07-15 13:25:54.099187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.445 [2024-07-15 13:25:54.099273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.445 [2024-07-15 13:25:54.099292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:57.445 [2024-07-15 13:25:54.099308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:57.445 [2024-07-15 13:25:54.099322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.445 [2024-07-15 13:25:54.099358] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:57.445 [2024-07-15 13:25:54.099737] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:57.445 [2024-07-15 13:25:54.099779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.445 [2024-07-15 13:25:54.099797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:57.445 [2024-07-15 13:25:54.099814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:26:57.445 [2024-07-15 13:25:54.099826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.445 [2024-07-15 13:25:54.100060] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 58ef9247-ca11-49fb-b5ae-1b35b173dc4e 00:26:57.445 [2024-07-15 13:25:54.101897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.445 [2024-07-15 13:25:54.101942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:57.445 [2024-07-15 13:25:54.101960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:26:57.445 [2024-07-15 13:25:54.101979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.445 [2024-07-15 13:25:54.111746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.445 [2024-07-15 13:25:54.111823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:57.445 [2024-07-15 13:25:54.111844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.701 ms 00:26:57.445 [2024-07-15 13:25:54.111869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.445 
[2024-07-15 13:25:54.111995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.445 [2024-07-15 13:25:54.112024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:57.445 [2024-07-15 13:25:54.112038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:26:57.445 [2024-07-15 13:25:54.112052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.445 [2024-07-15 13:25:54.112204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.445 [2024-07-15 13:25:54.112228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:57.445 [2024-07-15 13:25:54.112242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:57.445 [2024-07-15 13:25:54.112257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.445 [2024-07-15 13:25:54.112295] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:57.445 [2024-07-15 13:25:54.114602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.445 [2024-07-15 13:25:54.114651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:57.445 [2024-07-15 13:25:54.114671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.314 ms 00:26:57.445 [2024-07-15 13:25:54.114683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.445 [2024-07-15 13:25:54.114735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.445 [2024-07-15 13:25:54.114750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:57.445 [2024-07-15 13:25:54.114766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:57.445 [2024-07-15 13:25:54.114777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.445 [2024-07-15 13:25:54.114811] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:57.445 [2024-07-15 13:25:54.114972] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:57.445 [2024-07-15 13:25:54.114995] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:57.445 [2024-07-15 13:25:54.115012] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:57.445 [2024-07-15 13:25:54.115029] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:57.445 [2024-07-15 13:25:54.115043] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:57.445 [2024-07-15 13:25:54.115058] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:57.445 [2024-07-15 13:25:54.115069] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:57.445 [2024-07-15 13:25:54.115085] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:57.445 [2024-07-15 13:25:54.115097] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:57.445 [2024-07-15 13:25:54.115111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.445 [2024-07-15 13:25:54.115123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:57.445 [2024-07-15 
13:25:54.115137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:26:57.445 [2024-07-15 13:25:54.115165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.445 [2024-07-15 13:25:54.115274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.445 [2024-07-15 13:25:54.115290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:57.445 [2024-07-15 13:25:54.115307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:26:57.445 [2024-07-15 13:25:54.115319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.445 [2024-07-15 13:25:54.115433] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:57.445 [2024-07-15 13:25:54.115449] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:57.445 [2024-07-15 13:25:54.115475] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:57.445 [2024-07-15 13:25:54.115487] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:57.445 [2024-07-15 13:25:54.115504] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:57.445 [2024-07-15 13:25:54.115515] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:57.445 [2024-07-15 13:25:54.115528] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:57.445 [2024-07-15 13:25:54.115540] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:57.445 [2024-07-15 13:25:54.115553] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:57.445 [2024-07-15 13:25:54.115564] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:57.445 [2024-07-15 13:25:54.115576] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:57.445 [2024-07-15 13:25:54.115587] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:57.445 [2024-07-15 13:25:54.115599] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:57.445 [2024-07-15 13:25:54.115611] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:57.445 [2024-07-15 13:25:54.115627] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:57.445 [2024-07-15 13:25:54.115637] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:57.445 [2024-07-15 13:25:54.115650] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:57.445 [2024-07-15 13:25:54.115661] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:57.445 [2024-07-15 13:25:54.115674] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:57.445 [2024-07-15 13:25:54.115684] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:57.445 [2024-07-15 13:25:54.115697] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:57.445 [2024-07-15 13:25:54.115708] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:57.445 [2024-07-15 13:25:54.115722] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:57.445 [2024-07-15 13:25:54.115734] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:57.446 [2024-07-15 13:25:54.115747] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:57.446 [2024-07-15 13:25:54.115758] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 
00:26:57.446 [2024-07-15 13:25:54.115771] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:57.446 [2024-07-15 13:25:54.115781] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:57.446 [2024-07-15 13:25:54.115794] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:57.446 [2024-07-15 13:25:54.115805] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:57.446 [2024-07-15 13:25:54.115821] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:57.446 [2024-07-15 13:25:54.115832] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:57.446 [2024-07-15 13:25:54.115845] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:57.446 [2024-07-15 13:25:54.115856] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:57.446 [2024-07-15 13:25:54.115870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:57.446 [2024-07-15 13:25:54.115881] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:57.446 [2024-07-15 13:25:54.115894] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:57.446 [2024-07-15 13:25:54.115904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:57.446 [2024-07-15 13:25:54.115918] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:57.446 [2024-07-15 13:25:54.115929] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:57.446 [2024-07-15 13:25:54.115942] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:57.446 [2024-07-15 13:25:54.115953] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:57.446 [2024-07-15 13:25:54.115966] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:57.446 [2024-07-15 13:25:54.115977] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:57.446 [2024-07-15 13:25:54.115991] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:57.446 [2024-07-15 13:25:54.116003] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:57.446 [2024-07-15 13:25:54.116019] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:57.446 [2024-07-15 13:25:54.116034] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:57.446 [2024-07-15 13:25:54.116048] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:57.446 [2024-07-15 13:25:54.116059] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:57.446 [2024-07-15 13:25:54.116072] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:57.446 [2024-07-15 13:25:54.116082] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:57.446 [2024-07-15 13:25:54.116096] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:57.446 [2024-07-15 13:25:54.116112] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:57.446 [2024-07-15 13:25:54.116131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:57.446 [2024-07-15 13:25:54.116174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:57.446 
[2024-07-15 13:25:54.116191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:57.446 [2024-07-15 13:25:54.116203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:57.446 [2024-07-15 13:25:54.116217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:57.446 [2024-07-15 13:25:54.116229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:57.446 [2024-07-15 13:25:54.116243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:57.446 [2024-07-15 13:25:54.116254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:57.446 [2024-07-15 13:25:54.116271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:57.446 [2024-07-15 13:25:54.116283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:57.446 [2024-07-15 13:25:54.116296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:57.446 [2024-07-15 13:25:54.116309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:57.446 [2024-07-15 13:25:54.116322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:57.446 [2024-07-15 13:25:54.116334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:57.446 [2024-07-15 13:25:54.116354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:57.446 [2024-07-15 13:25:54.116367] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:57.446 [2024-07-15 13:25:54.116383] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:57.446 [2024-07-15 13:25:54.116404] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:57.446 [2024-07-15 13:25:54.116419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:57.446 [2024-07-15 13:25:54.116432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:57.446 [2024-07-15 13:25:54.116446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:57.446 [2024-07-15 13:25:54.116466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.446 [2024-07-15 13:25:54.116483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:57.446 [2024-07-15 
13:25:54.116496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.102 ms 00:26:57.446 [2024-07-15 13:25:54.116512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.446 [2024-07-15 13:25:54.116597] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:57.446 [2024-07-15 13:25:54.116633] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:59.973 [2024-07-15 13:25:56.555392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.973 [2024-07-15 13:25:56.555526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:59.973 [2024-07-15 13:25:56.555566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2438.803 ms 00:26:59.973 [2024-07-15 13:25:56.555617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.973 [2024-07-15 13:25:56.573228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.973 [2024-07-15 13:25:56.573332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:59.973 [2024-07-15 13:25:56.573371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.422 ms 00:26:59.973 [2024-07-15 13:25:56.573399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.973 [2024-07-15 13:25:56.573618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.973 [2024-07-15 13:25:56.573665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:59.973 [2024-07-15 13:25:56.573690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:26:59.973 [2024-07-15 13:25:56.573735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.973 [2024-07-15 13:25:56.587580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.973 [2024-07-15 13:25:56.587657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:59.973 [2024-07-15 13:25:56.587680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.717 ms 00:26:59.973 [2024-07-15 13:25:56.587695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.973 [2024-07-15 13:25:56.587764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.974 [2024-07-15 13:25:56.587794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:59.974 [2024-07-15 13:25:56.587814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:59.974 [2024-07-15 13:25:56.587828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.974 [2024-07-15 13:25:56.588489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.974 [2024-07-15 13:25:56.588523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:59.974 [2024-07-15 13:25:56.588546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:26:59.974 [2024-07-15 13:25:56.588561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.974 [2024-07-15 13:25:56.588730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.974 [2024-07-15 13:25:56.588758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:59.974 [2024-07-15 13:25:56.588772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.140 ms 00:26:59.974 [2024-07-15 13:25:56.588797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.974 [2024-07-15 13:25:56.599265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.974 [2024-07-15 13:25:56.599372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:59.974 [2024-07-15 13:25:56.599411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.430 ms 00:26:59.974 [2024-07-15 13:25:56.599441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.974 [2024-07-15 13:25:56.611659] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:59.974 [2024-07-15 13:25:56.615901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.974 [2024-07-15 13:25:56.615950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:59.974 [2024-07-15 13:25:56.615975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.231 ms 00:26:59.974 [2024-07-15 13:25:56.615989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.974 [2024-07-15 13:25:56.687096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.974 [2024-07-15 13:25:56.687201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:59.974 [2024-07-15 13:25:56.687229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.017 ms 00:26:59.974 [2024-07-15 13:25:56.687247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.974 [2024-07-15 13:25:56.687495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.974 [2024-07-15 13:25:56.687532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:59.974 [2024-07-15 13:25:56.687551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:26:59.974 [2024-07-15 13:25:56.687564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.974 [2024-07-15 13:25:56.691319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.974 [2024-07-15 13:25:56.691367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:59.974 [2024-07-15 13:25:56.691399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.691 ms 00:26:59.974 [2024-07-15 13:25:56.691415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.974 [2024-07-15 13:25:56.694562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.974 [2024-07-15 13:25:56.694606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:59.974 [2024-07-15 13:25:56.694628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.037 ms 00:26:59.974 [2024-07-15 13:25:56.694641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.974 [2024-07-15 13:25:56.695079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.974 [2024-07-15 13:25:56.695112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:59.974 [2024-07-15 13:25:56.695131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:26:59.974 [2024-07-15 13:25:56.695159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.231 [2024-07-15 13:25:56.730544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.231 
[2024-07-15 13:25:56.730624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:00.231 [2024-07-15 13:25:56.730652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.335 ms 00:27:00.231 [2024-07-15 13:25:56.730669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.231 [2024-07-15 13:25:56.736007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.231 [2024-07-15 13:25:56.736077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:00.231 [2024-07-15 13:25:56.736102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.269 ms 00:27:00.231 [2024-07-15 13:25:56.736116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.231 [2024-07-15 13:25:56.739917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.231 [2024-07-15 13:25:56.739966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:00.231 [2024-07-15 13:25:56.739989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.713 ms 00:27:00.231 [2024-07-15 13:25:56.740001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.231 [2024-07-15 13:25:56.743879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.231 [2024-07-15 13:25:56.743928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:00.231 [2024-07-15 13:25:56.743951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.816 ms 00:27:00.231 [2024-07-15 13:25:56.743964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.231 [2024-07-15 13:25:56.744033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.231 [2024-07-15 13:25:56.744055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:00.231 [2024-07-15 13:25:56.744072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:27:00.231 [2024-07-15 13:25:56.744084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.231 [2024-07-15 13:25:56.744269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.231 [2024-07-15 13:25:56.744291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:00.231 [2024-07-15 13:25:56.744308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:27:00.231 [2024-07-15 13:25:56.744321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.231 [2024-07-15 13:25:56.745628] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2646.083 ms, result 0 00:27:00.231 { 00:27:00.231 "name": "ftl0", 00:27:00.231 "uuid": "58ef9247-ca11-49fb-b5ae-1b35b173dc4e" 00:27:00.231 } 00:27:00.231 13:25:56 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:27:00.231 13:25:56 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:00.488 13:25:57 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:27:00.488 13:25:57 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:00.748 [2024-07-15 13:25:57.283919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.748 [2024-07-15 13:25:57.283999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 
00:27:00.748 [2024-07-15 13:25:57.284023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:00.748 [2024-07-15 13:25:57.284053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.748 [2024-07-15 13:25:57.284094] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:00.748 [2024-07-15 13:25:57.284978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.748 [2024-07-15 13:25:57.285014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:00.748 [2024-07-15 13:25:57.285037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.857 ms 00:27:00.748 [2024-07-15 13:25:57.285049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.748 [2024-07-15 13:25:57.285417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.748 [2024-07-15 13:25:57.285459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:00.748 [2024-07-15 13:25:57.285479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:27:00.748 [2024-07-15 13:25:57.285491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.748 [2024-07-15 13:25:57.288735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.748 [2024-07-15 13:25:57.288769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:00.748 [2024-07-15 13:25:57.288787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.210 ms 00:27:00.748 [2024-07-15 13:25:57.288799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.748 [2024-07-15 13:25:57.295268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.748 [2024-07-15 13:25:57.295322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:00.748 [2024-07-15 13:25:57.295357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.431 ms 00:27:00.748 [2024-07-15 13:25:57.295370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.748 [2024-07-15 13:25:57.297409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.748 [2024-07-15 13:25:57.297456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:00.748 [2024-07-15 13:25:57.297483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.883 ms 00:27:00.748 [2024-07-15 13:25:57.297495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.748 [2024-07-15 13:25:57.302315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.748 [2024-07-15 13:25:57.302391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:00.748 [2024-07-15 13:25:57.302420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.763 ms 00:27:00.748 [2024-07-15 13:25:57.302434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.748 [2024-07-15 13:25:57.302604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.748 [2024-07-15 13:25:57.302623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:00.748 [2024-07-15 13:25:57.302640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:27:00.748 [2024-07-15 13:25:57.302656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.748 [2024-07-15 
13:25:57.304466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.748 [2024-07-15 13:25:57.304506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:27:00.748 [2024-07-15 13:25:57.304527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.778 ms 00:27:00.748 [2024-07-15 13:25:57.304539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.748 [2024-07-15 13:25:57.306090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.748 [2024-07-15 13:25:57.306130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:27:00.748 [2024-07-15 13:25:57.306175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.501 ms 00:27:00.748 [2024-07-15 13:25:57.306191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.748 [2024-07-15 13:25:57.307427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.748 [2024-07-15 13:25:57.307468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:00.748 [2024-07-15 13:25:57.307487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.186 ms 00:27:00.748 [2024-07-15 13:25:57.307499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.748 [2024-07-15 13:25:57.308848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.748 [2024-07-15 13:25:57.308887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:00.748 [2024-07-15 13:25:57.308907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.263 ms 00:27:00.748 [2024-07-15 13:25:57.308919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.748 [2024-07-15 13:25:57.308968] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:00.748 [2024-07-15 13:25:57.308991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309526] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:00.748 [2024-07-15 13:25:57.309543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 
13:25:57.309869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.309995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:27:00.749 [2024-07-15 13:25:57.310240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:00.749 [2024-07-15 13:25:57.310440] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:00.749 [2024-07-15 13:25:57.310458] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 58ef9247-ca11-49fb-b5ae-1b35b173dc4e 00:27:00.749 [2024-07-15 13:25:57.310470] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:00.749 [2024-07-15 13:25:57.310486] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:00.749 [2024-07-15 13:25:57.310497] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:00.749 [2024-07-15 13:25:57.310512] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:00.749 [2024-07-15 13:25:57.310523] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:00.749 [2024-07-15 13:25:57.310538] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:00.749 [2024-07-15 13:25:57.310553] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:00.749 [2024-07-15 13:25:57.310565] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:00.749 [2024-07-15 13:25:57.310575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:00.749 [2024-07-15 13:25:57.310589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.749 [2024-07-15 13:25:57.310611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:00.749 [2024-07-15 13:25:57.310627] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.628 ms 00:27:00.749 [2024-07-15 13:25:57.310639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.749 [2024-07-15 13:25:57.312858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.749 [2024-07-15 13:25:57.312897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:00.749 [2024-07-15 13:25:57.312920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.185 ms 00:27:00.749 [2024-07-15 13:25:57.312932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.749 [2024-07-15 13:25:57.313080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.749 [2024-07-15 13:25:57.313105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:00.749 [2024-07-15 13:25:57.313121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:27:00.749 [2024-07-15 13:25:57.313133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.749 [2024-07-15 13:25:57.321683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.749 [2024-07-15 13:25:57.321767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:00.749 [2024-07-15 13:25:57.321794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.749 [2024-07-15 13:25:57.321820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.749 [2024-07-15 13:25:57.321931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.749 [2024-07-15 13:25:57.321959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:00.749 [2024-07-15 13:25:57.321984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.749 [2024-07-15 13:25:57.321996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.749 [2024-07-15 13:25:57.322185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.749 [2024-07-15 13:25:57.322207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:00.749 [2024-07-15 13:25:57.322227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.749 [2024-07-15 13:25:57.322239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.749 [2024-07-15 13:25:57.322275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.749 [2024-07-15 13:25:57.322288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:00.749 [2024-07-15 13:25:57.322304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.749 [2024-07-15 13:25:57.322315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.749 [2024-07-15 13:25:57.339549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.749 [2024-07-15 13:25:57.339638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:00.749 [2024-07-15 13:25:57.339673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.749 [2024-07-15 13:25:57.339686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.750 [2024-07-15 13:25:57.349982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.750 [2024-07-15 13:25:57.350068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize metadata 00:27:00.750 [2024-07-15 13:25:57.350106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.750 [2024-07-15 13:25:57.350119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.750 [2024-07-15 13:25:57.350259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.750 [2024-07-15 13:25:57.350280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:00.750 [2024-07-15 13:25:57.350300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.750 [2024-07-15 13:25:57.350313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.750 [2024-07-15 13:25:57.350389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.750 [2024-07-15 13:25:57.350409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:00.750 [2024-07-15 13:25:57.350425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.750 [2024-07-15 13:25:57.350437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.750 [2024-07-15 13:25:57.350549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.750 [2024-07-15 13:25:57.350568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:00.750 [2024-07-15 13:25:57.350583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.750 [2024-07-15 13:25:57.350595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.750 [2024-07-15 13:25:57.350651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.750 [2024-07-15 13:25:57.350690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:00.750 [2024-07-15 13:25:57.350711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.750 [2024-07-15 13:25:57.350723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.750 [2024-07-15 13:25:57.350783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.750 [2024-07-15 13:25:57.350798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:00.750 [2024-07-15 13:25:57.350816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.750 [2024-07-15 13:25:57.350828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.750 [2024-07-15 13:25:57.350891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.750 [2024-07-15 13:25:57.350910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:00.750 [2024-07-15 13:25:57.350925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.750 [2024-07-15 13:25:57.350937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.750 [2024-07-15 13:25:57.351119] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 67.144 ms, result 0 00:27:00.750 true 00:27:00.750 13:25:57 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 95491 00:27:00.750 13:25:57 ftl.ftl_restore_fast -- common/autotest_common.sh@946 -- # '[' -z 95491 ']' 00:27:00.750 13:25:57 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # kill -0 95491 00:27:00.750 13:25:57 ftl.ftl_restore_fast -- common/autotest_common.sh@951 -- # uname 00:27:00.750 13:25:57 
ftl.ftl_restore_fast -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:00.750 13:25:57 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95491 00:27:00.750 killing process with pid 95491 00:27:00.750 13:25:57 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:00.750 13:25:57 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:00.750 13:25:57 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95491' 00:27:00.750 13:25:57 ftl.ftl_restore_fast -- common/autotest_common.sh@965 -- # kill 95491 00:27:00.750 13:25:57 ftl.ftl_restore_fast -- common/autotest_common.sh@970 -- # wait 95491 00:27:04.035 13:26:00 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:27:09.297 262144+0 records in 00:27:09.297 262144+0 records out 00:27:09.297 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.67832 s, 230 MB/s 00:27:09.297 13:26:05 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:11.194 13:26:07 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:11.194 [2024-07-15 13:26:07.617112] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:11.194 [2024-07-15 13:26:07.617357] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95710 ] 00:27:11.194 [2024-07-15 13:26:07.769978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.194 [2024-07-15 13:26:07.876409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.454 [2024-07-15 13:26:08.006661] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:11.454 [2024-07-15 13:26:08.006748] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:11.454 [2024-07-15 13:26:08.161293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.454 [2024-07-15 13:26:08.161380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:11.454 [2024-07-15 13:26:08.161414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:11.454 [2024-07-15 13:26:08.161426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.454 [2024-07-15 13:26:08.161530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.454 [2024-07-15 13:26:08.161552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:11.454 [2024-07-15 13:26:08.161573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:27:11.454 [2024-07-15 13:26:08.161597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.454 [2024-07-15 13:26:08.161648] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:11.454 [2024-07-15 13:26:08.162014] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:11.454 [2024-07-15 13:26:08.162061] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:27:11.454 [2024-07-15 13:26:08.162083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:11.454 [2024-07-15 13:26:08.162096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:27:11.454 [2024-07-15 13:26:08.162107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.454 [2024-07-15 13:26:08.164091] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:11.454 [2024-07-15 13:26:08.167078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.454 [2024-07-15 13:26:08.167128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:11.454 [2024-07-15 13:26:08.167168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.989 ms 00:27:11.454 [2024-07-15 13:26:08.167194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.454 [2024-07-15 13:26:08.167274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.454 [2024-07-15 13:26:08.167295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:11.454 [2024-07-15 13:26:08.167308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:27:11.454 [2024-07-15 13:26:08.167329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.454 [2024-07-15 13:26:08.175931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.454 [2024-07-15 13:26:08.176004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:11.454 [2024-07-15 13:26:08.176024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.518 ms 00:27:11.454 [2024-07-15 13:26:08.176046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.454 [2024-07-15 13:26:08.176219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.454 [2024-07-15 13:26:08.176242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:11.454 [2024-07-15 13:26:08.176255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:27:11.454 [2024-07-15 13:26:08.176266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.454 [2024-07-15 13:26:08.176376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.454 [2024-07-15 13:26:08.176396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:11.454 [2024-07-15 13:26:08.176420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:27:11.454 [2024-07-15 13:26:08.176441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.454 [2024-07-15 13:26:08.176487] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:11.454 [2024-07-15 13:26:08.178603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.454 [2024-07-15 13:26:08.178642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:11.454 [2024-07-15 13:26:08.178658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.132 ms 00:27:11.454 [2024-07-15 13:26:08.178669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.454 [2024-07-15 13:26:08.178736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.454 [2024-07-15 13:26:08.178754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Decorate bands 00:27:11.454 [2024-07-15 13:26:08.178774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:11.454 [2024-07-15 13:26:08.178796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.454 [2024-07-15 13:26:08.178840] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:11.454 [2024-07-15 13:26:08.178874] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:11.454 [2024-07-15 13:26:08.178934] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:11.454 [2024-07-15 13:26:08.178970] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:27:11.454 [2024-07-15 13:26:08.179077] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:11.454 [2024-07-15 13:26:08.179117] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:11.454 [2024-07-15 13:26:08.179134] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:11.454 [2024-07-15 13:26:08.179165] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:11.454 [2024-07-15 13:26:08.179180] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:11.454 [2024-07-15 13:26:08.179193] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:11.454 [2024-07-15 13:26:08.179204] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:11.454 [2024-07-15 13:26:08.179215] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:11.454 [2024-07-15 13:26:08.179226] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:11.454 [2024-07-15 13:26:08.179239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.454 [2024-07-15 13:26:08.179250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:11.454 [2024-07-15 13:26:08.179262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:27:11.454 [2024-07-15 13:26:08.179278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.454 [2024-07-15 13:26:08.179375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.454 [2024-07-15 13:26:08.179391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:11.454 [2024-07-15 13:26:08.179404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:27:11.454 [2024-07-15 13:26:08.179415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.454 [2024-07-15 13:26:08.179539] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:11.454 [2024-07-15 13:26:08.179567] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:11.454 [2024-07-15 13:26:08.179581] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:11.454 [2024-07-15 13:26:08.179592] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:11.454 [2024-07-15 13:26:08.179609] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:11.454 [2024-07-15 
13:26:08.179620] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:11.454 [2024-07-15 13:26:08.179631] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:11.454 [2024-07-15 13:26:08.179641] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:11.454 [2024-07-15 13:26:08.179655] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:11.454 [2024-07-15 13:26:08.179665] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:11.454 [2024-07-15 13:26:08.179676] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:11.454 [2024-07-15 13:26:08.179686] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:11.454 [2024-07-15 13:26:08.179697] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:11.454 [2024-07-15 13:26:08.179707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:11.454 [2024-07-15 13:26:08.179718] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:11.454 [2024-07-15 13:26:08.179729] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:11.454 [2024-07-15 13:26:08.179743] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:11.454 [2024-07-15 13:26:08.179755] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:11.454 [2024-07-15 13:26:08.179766] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:11.454 [2024-07-15 13:26:08.179777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:11.454 [2024-07-15 13:26:08.179787] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:11.454 [2024-07-15 13:26:08.179799] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:11.454 [2024-07-15 13:26:08.179809] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:11.455 [2024-07-15 13:26:08.179819] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:11.455 [2024-07-15 13:26:08.179829] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:11.455 [2024-07-15 13:26:08.179840] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:11.455 [2024-07-15 13:26:08.179850] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:11.455 [2024-07-15 13:26:08.179860] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:11.455 [2024-07-15 13:26:08.179870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:11.455 [2024-07-15 13:26:08.179880] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:11.455 [2024-07-15 13:26:08.179890] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:11.455 [2024-07-15 13:26:08.179900] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:11.455 [2024-07-15 13:26:08.179917] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:11.455 [2024-07-15 13:26:08.179929] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:11.455 [2024-07-15 13:26:08.179939] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:11.455 [2024-07-15 13:26:08.179950] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:11.455 [2024-07-15 13:26:08.179960] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:27:11.455 [2024-07-15 13:26:08.179970] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:11.455 [2024-07-15 13:26:08.179980] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:11.455 [2024-07-15 13:26:08.179991] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:11.455 [2024-07-15 13:26:08.180001] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:11.455 [2024-07-15 13:26:08.180012] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:11.455 [2024-07-15 13:26:08.180022] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:11.455 [2024-07-15 13:26:08.180032] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:11.455 [2024-07-15 13:26:08.180044] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:11.455 [2024-07-15 13:26:08.180055] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:11.455 [2024-07-15 13:26:08.180065] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:11.455 [2024-07-15 13:26:08.180078] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:11.455 [2024-07-15 13:26:08.180092] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:11.455 [2024-07-15 13:26:08.180103] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:11.455 [2024-07-15 13:26:08.180114] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:11.455 [2024-07-15 13:26:08.180125] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:11.455 [2024-07-15 13:26:08.180135] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:11.455 [2024-07-15 13:26:08.180176] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:11.455 [2024-07-15 13:26:08.180196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:11.455 [2024-07-15 13:26:08.180214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:11.455 [2024-07-15 13:26:08.180226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:11.455 [2024-07-15 13:26:08.180238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:11.455 [2024-07-15 13:26:08.180250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:11.455 [2024-07-15 13:26:08.180267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:11.455 [2024-07-15 13:26:08.180280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:11.455 [2024-07-15 13:26:08.180292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:11.455 [2024-07-15 13:26:08.180303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:11.455 [2024-07-15 
13:26:08.180314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:11.455 [2024-07-15 13:26:08.180330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:11.455 [2024-07-15 13:26:08.180343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:11.455 [2024-07-15 13:26:08.180355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:11.455 [2024-07-15 13:26:08.180367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:11.455 [2024-07-15 13:26:08.180386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:11.455 [2024-07-15 13:26:08.180398] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:11.455 [2024-07-15 13:26:08.180410] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:11.455 [2024-07-15 13:26:08.180423] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:11.455 [2024-07-15 13:26:08.180435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:11.455 [2024-07-15 13:26:08.180459] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:11.455 [2024-07-15 13:26:08.180471] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:11.455 [2024-07-15 13:26:08.180483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.455 [2024-07-15 13:26:08.180495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:11.455 [2024-07-15 13:26:08.180517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.013 ms 00:27:11.455 [2024-07-15 13:26:08.180546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.713 [2024-07-15 13:26:08.203797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.713 [2024-07-15 13:26:08.203880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:11.713 [2024-07-15 13:26:08.203908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.113 ms 00:27:11.713 [2024-07-15 13:26:08.203924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.713 [2024-07-15 13:26:08.204095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.713 [2024-07-15 13:26:08.204119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:11.713 [2024-07-15 13:26:08.204136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:27:11.713 [2024-07-15 13:26:08.204208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.713 [2024-07-15 13:26:08.217877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.713 [2024-07-15 
13:26:08.217939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:11.713 [2024-07-15 13:26:08.217959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.534 ms 00:27:11.713 [2024-07-15 13:26:08.217971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.713 [2024-07-15 13:26:08.218072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.713 [2024-07-15 13:26:08.218092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:11.713 [2024-07-15 13:26:08.218119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:11.713 [2024-07-15 13:26:08.218137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.713 [2024-07-15 13:26:08.218804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.713 [2024-07-15 13:26:08.218844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:11.713 [2024-07-15 13:26:08.218860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:27:11.713 [2024-07-15 13:26:08.218871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.713 [2024-07-15 13:26:08.219054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.713 [2024-07-15 13:26:08.219080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:11.713 [2024-07-15 13:26:08.219092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:27:11.713 [2024-07-15 13:26:08.219103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.713 [2024-07-15 13:26:08.226831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.713 [2024-07-15 13:26:08.226897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:11.713 [2024-07-15 13:26:08.226916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.691 ms 00:27:11.713 [2024-07-15 13:26:08.226928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.713 [2024-07-15 13:26:08.230098] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:11.713 [2024-07-15 13:26:08.230188] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:11.713 [2024-07-15 13:26:08.230211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.713 [2024-07-15 13:26:08.230225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:11.713 [2024-07-15 13:26:08.230239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.127 ms 00:27:11.713 [2024-07-15 13:26:08.230250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.713 [2024-07-15 13:26:08.246287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.713 [2024-07-15 13:26:08.246386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:11.713 [2024-07-15 13:26:08.246408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.977 ms 00:27:11.713 [2024-07-15 13:26:08.246435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.713 [2024-07-15 13:26:08.249549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.713 [2024-07-15 13:26:08.249594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Restore band info metadata 00:27:11.713 [2024-07-15 13:26:08.249612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.021 ms 00:27:11.713 [2024-07-15 13:26:08.249623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.713 [2024-07-15 13:26:08.251281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.713 [2024-07-15 13:26:08.251321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:11.713 [2024-07-15 13:26:08.251337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.602 ms 00:27:11.713 [2024-07-15 13:26:08.251348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.713 [2024-07-15 13:26:08.251808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.713 [2024-07-15 13:26:08.251838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:11.713 [2024-07-15 13:26:08.251853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:27:11.713 [2024-07-15 13:26:08.251864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.713 [2024-07-15 13:26:08.275624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.713 [2024-07-15 13:26:08.275709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:11.713 [2024-07-15 13:26:08.275742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.732 ms 00:27:11.713 [2024-07-15 13:26:08.275755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.713 [2024-07-15 13:26:08.285628] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:11.713 [2024-07-15 13:26:08.290177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.713 [2024-07-15 13:26:08.290242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:11.713 [2024-07-15 13:26:08.290263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.302 ms 00:27:11.713 [2024-07-15 13:26:08.290276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.713 [2024-07-15 13:26:08.290424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.713 [2024-07-15 13:26:08.290445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:11.713 [2024-07-15 13:26:08.290459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:11.713 [2024-07-15 13:26:08.290470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.714 [2024-07-15 13:26:08.290576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.714 [2024-07-15 13:26:08.290597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:11.714 [2024-07-15 13:26:08.290616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:27:11.714 [2024-07-15 13:26:08.290632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.714 [2024-07-15 13:26:08.290668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.714 [2024-07-15 13:26:08.290683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:11.714 [2024-07-15 13:26:08.290695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:11.714 [2024-07-15 13:26:08.290706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:11.714 [2024-07-15 13:26:08.290751] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:11.714 [2024-07-15 13:26:08.290776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.714 [2024-07-15 13:26:08.290789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:11.714 [2024-07-15 13:26:08.290801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:11.714 [2024-07-15 13:26:08.290831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.714 [2024-07-15 13:26:08.295156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.714 [2024-07-15 13:26:08.295203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:11.714 [2024-07-15 13:26:08.295220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.283 ms 00:27:11.714 [2024-07-15 13:26:08.295232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.714 [2024-07-15 13:26:08.295335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.714 [2024-07-15 13:26:08.295356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:11.714 [2024-07-15 13:26:08.295369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:27:11.714 [2024-07-15 13:26:08.295380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.714 [2024-07-15 13:26:08.296736] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 134.931 ms, result 0 00:27:51.423  Copying: 26/1024 [MB] (26 MBps) Copying: 52/1024 [MB] (26 MBps) Copying: 78/1024 [MB] (26 MBps) Copying: 104/1024 [MB] (25 MBps) Copying: 128/1024 [MB] (24 MBps) Copying: 153/1024 [MB] (24 MBps) Copying: 179/1024 [MB] (25 MBps) Copying: 205/1024 [MB] (25 MBps) Copying: 230/1024 [MB] (24 MBps) Copying: 255/1024 [MB] (25 MBps) Copying: 280/1024 [MB] (25 MBps) Copying: 306/1024 [MB] (25 MBps) Copying: 331/1024 [MB] (24 MBps) Copying: 356/1024 [MB] (25 MBps) Copying: 382/1024 [MB] (25 MBps) Copying: 407/1024 [MB] (25 MBps) Copying: 433/1024 [MB] (25 MBps) Copying: 459/1024 [MB] (25 MBps) Copying: 483/1024 [MB] (24 MBps) Copying: 508/1024 [MB] (24 MBps) Copying: 534/1024 [MB] (25 MBps) Copying: 560/1024 [MB] (25 MBps) Copying: 585/1024 [MB] (25 MBps) Copying: 610/1024 [MB] (25 MBps) Copying: 636/1024 [MB] (25 MBps) Copying: 661/1024 [MB] (25 MBps) Copying: 687/1024 [MB] (26 MBps) Copying: 712/1024 [MB] (25 MBps) Copying: 738/1024 [MB] (25 MBps) Copying: 765/1024 [MB] (26 MBps) Copying: 791/1024 [MB] (25 MBps) Copying: 817/1024 [MB] (26 MBps) Copying: 843/1024 [MB] (26 MBps) Copying: 870/1024 [MB] (26 MBps) Copying: 897/1024 [MB] (27 MBps) Copying: 924/1024 [MB] (26 MBps) Copying: 950/1024 [MB] (25 MBps) Copying: 976/1024 [MB] (26 MBps) Copying: 1003/1024 [MB] (26 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-15 13:26:48.097901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.423 [2024-07-15 13:26:48.097975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:51.423 [2024-07-15 13:26:48.097998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:51.423 [2024-07-15 13:26:48.098028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.423 [2024-07-15 13:26:48.098072] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: 
[FTL][ftl0] FTL IO channel destroy on app_thread 00:27:51.423 [2024-07-15 13:26:48.098928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.423 [2024-07-15 13:26:48.098957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:51.423 [2024-07-15 13:26:48.098972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.832 ms 00:27:51.423 [2024-07-15 13:26:48.098984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.423 [2024-07-15 13:26:48.100530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.423 [2024-07-15 13:26:48.100571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:51.423 [2024-07-15 13:26:48.100587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.515 ms 00:27:51.423 [2024-07-15 13:26:48.100598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.423 [2024-07-15 13:26:48.100665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.423 [2024-07-15 13:26:48.100685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:27:51.423 [2024-07-15 13:26:48.100709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:51.423 [2024-07-15 13:26:48.100720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.423 [2024-07-15 13:26:48.100782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.423 [2024-07-15 13:26:48.100799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:27:51.423 [2024-07-15 13:26:48.100811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:27:51.423 [2024-07-15 13:26:48.100822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.423 [2024-07-15 13:26:48.100842] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:51.423 [2024-07-15 13:26:48.100865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:51.423 [2024-07-15 13:26:48.100879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:51.423 [2024-07-15 13:26:48.100892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:51.423 [2024-07-15 13:26:48.100904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:51.423 [2024-07-15 13:26:48.100916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:51.423 [2024-07-15 13:26:48.100928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:51.423 [2024-07-15 13:26:48.100940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:51.423 [2024-07-15 13:26:48.100952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:51.423 [2024-07-15 13:26:48.100964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:51.423 [2024-07-15 13:26:48.100976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:51.423 [2024-07-15 13:26:48.100987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 
00:27:51.423 [2024-07-15 13:26:48.100999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:51.423 [2024-07-15 13:26:48.101010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:51.423 [2024-07-15 13:26:48.101022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:51.423 [2024-07-15 13:26:48.101034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:51.423 [2024-07-15 13:26:48.101045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:51.423 [2024-07-15 13:26:48.101057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:51.423 [2024-07-15 13:26:48.101068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:51.423 [2024-07-15 13:26:48.101080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:51.423 [2024-07-15 13:26:48.101092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:51.423 [2024-07-15 13:26:48.101103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:51.423 [2024-07-15 13:26:48.101125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:51.423 [2024-07-15 13:26:48.101136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 
wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101912] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.101994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.102006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.102018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.102030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.102054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.102067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.102079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.102090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:51.424 [2024-07-15 13:26:48.102112] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:51.424 [2024-07-15 13:26:48.102124] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 58ef9247-ca11-49fb-b5ae-1b35b173dc4e 00:27:51.424 [2024-07-15 13:26:48.102136] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:51.424 [2024-07-15 13:26:48.102159] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:27:51.424 [2024-07-15 13:26:48.102171] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:51.424 [2024-07-15 13:26:48.102182] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:51.424 [2024-07-15 13:26:48.102193] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:51.424 [2024-07-15 13:26:48.102204] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:51.424 [2024-07-15 13:26:48.102215] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:51.424 [2024-07-15 13:26:48.102225] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:51.424 [2024-07-15 13:26:48.102235] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:51.424 [2024-07-15 13:26:48.102251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.424 [2024-07-15 13:26:48.102263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 
00:27:51.424 [2024-07-15 13:26:48.102281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.410 ms 00:27:51.424 [2024-07-15 13:26:48.102292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.424 [2024-07-15 13:26:48.104364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.424 [2024-07-15 13:26:48.104397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:51.424 [2024-07-15 13:26:48.104412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.048 ms 00:27:51.424 [2024-07-15 13:26:48.104423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.424 [2024-07-15 13:26:48.104553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.424 [2024-07-15 13:26:48.104587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:51.424 [2024-07-15 13:26:48.104601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:27:51.425 [2024-07-15 13:26:48.104622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.425 [2024-07-15 13:26:48.111580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.425 [2024-07-15 13:26:48.111642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:51.425 [2024-07-15 13:26:48.111660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.425 [2024-07-15 13:26:48.111672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.425 [2024-07-15 13:26:48.111755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.425 [2024-07-15 13:26:48.111780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:51.425 [2024-07-15 13:26:48.111793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.425 [2024-07-15 13:26:48.111803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.425 [2024-07-15 13:26:48.111883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.425 [2024-07-15 13:26:48.111903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:51.425 [2024-07-15 13:26:48.111916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.425 [2024-07-15 13:26:48.111945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.425 [2024-07-15 13:26:48.111976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.425 [2024-07-15 13:26:48.111994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:51.425 [2024-07-15 13:26:48.112011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.425 [2024-07-15 13:26:48.112030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.425 [2024-07-15 13:26:48.128502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.425 [2024-07-15 13:26:48.128570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:51.425 [2024-07-15 13:26:48.128600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.425 [2024-07-15 13:26:48.128612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.425 [2024-07-15 13:26:48.138656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.425 [2024-07-15 13:26:48.138743] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:51.425 [2024-07-15 13:26:48.138762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.425 [2024-07-15 13:26:48.138774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.425 [2024-07-15 13:26:48.138861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.425 [2024-07-15 13:26:48.138879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:51.425 [2024-07-15 13:26:48.138892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.425 [2024-07-15 13:26:48.138903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.425 [2024-07-15 13:26:48.138969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.425 [2024-07-15 13:26:48.138984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:51.425 [2024-07-15 13:26:48.138997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.425 [2024-07-15 13:26:48.139015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.425 [2024-07-15 13:26:48.139089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.425 [2024-07-15 13:26:48.139108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:51.425 [2024-07-15 13:26:48.139120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.425 [2024-07-15 13:26:48.139131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.425 [2024-07-15 13:26:48.139191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.425 [2024-07-15 13:26:48.139210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:51.425 [2024-07-15 13:26:48.139222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.425 [2024-07-15 13:26:48.139234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.425 [2024-07-15 13:26:48.139286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.425 [2024-07-15 13:26:48.139302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:51.425 [2024-07-15 13:26:48.139326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.425 [2024-07-15 13:26:48.139336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.425 [2024-07-15 13:26:48.139391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.425 [2024-07-15 13:26:48.139408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:51.425 [2024-07-15 13:26:48.139420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.425 [2024-07-15 13:26:48.139448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.425 [2024-07-15 13:26:48.139613] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 41.668 ms, result 0 00:27:51.999 00:27:51.999 00:27:51.999 13:26:48 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:27:51.999 [2024-07-15 13:26:48.575863] Starting SPDK v24.05.1-pre git 
sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:51.999 [2024-07-15 13:26:48.576093] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96108 ] 00:27:51.999 [2024-07-15 13:26:48.729654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.257 [2024-07-15 13:26:48.827023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.257 [2024-07-15 13:26:48.954436] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:52.257 [2024-07-15 13:26:48.954543] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:52.516 [2024-07-15 13:26:49.108828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.516 [2024-07-15 13:26:49.108908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:52.516 [2024-07-15 13:26:49.108931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:52.516 [2024-07-15 13:26:49.108956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.516 [2024-07-15 13:26:49.109063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.516 [2024-07-15 13:26:49.109085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:52.516 [2024-07-15 13:26:49.109099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:27:52.516 [2024-07-15 13:26:49.109116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.516 [2024-07-15 13:26:49.109165] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:52.516 [2024-07-15 13:26:49.109565] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:52.516 [2024-07-15 13:26:49.109597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.516 [2024-07-15 13:26:49.109617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:52.516 [2024-07-15 13:26:49.109629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:27:52.516 [2024-07-15 13:26:49.109648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.516 [2024-07-15 13:26:49.110137] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:27:52.516 [2024-07-15 13:26:49.110179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.516 [2024-07-15 13:26:49.110211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:52.516 [2024-07-15 13:26:49.110226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:27:52.516 [2024-07-15 13:26:49.110243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.516 [2024-07-15 13:26:49.110319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.516 [2024-07-15 13:26:49.110338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:52.516 [2024-07-15 13:26:49.110351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:27:52.516 [2024-07-15 13:26:49.110374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.516 [2024-07-15 13:26:49.110777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:27:52.516 [2024-07-15 13:26:49.110797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:52.516 [2024-07-15 13:26:49.110811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:27:52.516 [2024-07-15 13:26:49.110828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.516 [2024-07-15 13:26:49.110937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.516 [2024-07-15 13:26:49.110957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:52.516 [2024-07-15 13:26:49.110970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:27:52.516 [2024-07-15 13:26:49.110982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.516 [2024-07-15 13:26:49.111031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.516 [2024-07-15 13:26:49.111051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:52.516 [2024-07-15 13:26:49.111063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:27:52.516 [2024-07-15 13:26:49.111080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.516 [2024-07-15 13:26:49.111116] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:52.516 [2024-07-15 13:26:49.113331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.516 [2024-07-15 13:26:49.113364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:52.516 [2024-07-15 13:26:49.113379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.221 ms 00:27:52.516 [2024-07-15 13:26:49.113391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.516 [2024-07-15 13:26:49.113436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.516 [2024-07-15 13:26:49.113465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:52.516 [2024-07-15 13:26:49.113478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:52.516 [2024-07-15 13:26:49.113500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.516 [2024-07-15 13:26:49.113556] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:52.516 [2024-07-15 13:26:49.113589] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:52.516 [2024-07-15 13:26:49.113647] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:52.516 [2024-07-15 13:26:49.113683] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:27:52.516 [2024-07-15 13:26:49.113789] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:52.516 [2024-07-15 13:26:49.113807] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:52.516 [2024-07-15 13:26:49.113834] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:52.516 [2024-07-15 13:26:49.113854] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:52.516 [2024-07-15 13:26:49.113869] 
ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:52.516 [2024-07-15 13:26:49.113881] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:52.516 [2024-07-15 13:26:49.113893] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:52.516 [2024-07-15 13:26:49.113904] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:52.516 [2024-07-15 13:26:49.113915] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:52.516 [2024-07-15 13:26:49.113932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.516 [2024-07-15 13:26:49.113945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:52.516 [2024-07-15 13:26:49.113968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:27:52.516 [2024-07-15 13:26:49.113980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.516 [2024-07-15 13:26:49.114089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.516 [2024-07-15 13:26:49.114112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:52.516 [2024-07-15 13:26:49.114125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:27:52.516 [2024-07-15 13:26:49.114137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.516 [2024-07-15 13:26:49.114260] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:52.516 [2024-07-15 13:26:49.114277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:52.516 [2024-07-15 13:26:49.114304] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:52.516 [2024-07-15 13:26:49.114315] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:52.516 [2024-07-15 13:26:49.114336] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:52.516 [2024-07-15 13:26:49.114347] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:52.516 [2024-07-15 13:26:49.114359] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:52.516 [2024-07-15 13:26:49.114369] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:52.516 [2024-07-15 13:26:49.114384] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:52.516 [2024-07-15 13:26:49.114396] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:52.516 [2024-07-15 13:26:49.114407] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:52.516 [2024-07-15 13:26:49.114417] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:52.516 [2024-07-15 13:26:49.114428] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:52.516 [2024-07-15 13:26:49.114438] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:52.516 [2024-07-15 13:26:49.114449] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:52.516 [2024-07-15 13:26:49.114460] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:52.516 [2024-07-15 13:26:49.114470] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:52.516 [2024-07-15 13:26:49.114483] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:52.516 [2024-07-15 13:26:49.114494] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:52.516 [2024-07-15 13:26:49.114506] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:52.516 [2024-07-15 13:26:49.114517] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:52.516 [2024-07-15 13:26:49.114528] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:52.516 [2024-07-15 13:26:49.114539] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:52.516 [2024-07-15 13:26:49.114550] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:52.516 [2024-07-15 13:26:49.114563] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:52.516 [2024-07-15 13:26:49.114574] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:52.516 [2024-07-15 13:26:49.114585] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:52.516 [2024-07-15 13:26:49.114596] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:52.516 [2024-07-15 13:26:49.114607] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:52.516 [2024-07-15 13:26:49.114617] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:52.516 [2024-07-15 13:26:49.114628] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:52.516 [2024-07-15 13:26:49.114639] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:52.516 [2024-07-15 13:26:49.114650] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:52.516 [2024-07-15 13:26:49.114660] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:52.516 [2024-07-15 13:26:49.114671] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:52.516 [2024-07-15 13:26:49.114682] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:52.516 [2024-07-15 13:26:49.114692] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:52.516 [2024-07-15 13:26:49.114703] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:52.516 [2024-07-15 13:26:49.114714] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:52.516 [2024-07-15 13:26:49.114724] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:52.516 [2024-07-15 13:26:49.114741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:52.516 [2024-07-15 13:26:49.114753] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:52.516 [2024-07-15 13:26:49.114763] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:52.517 [2024-07-15 13:26:49.114774] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:52.517 [2024-07-15 13:26:49.114795] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:52.517 [2024-07-15 13:26:49.114810] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:52.517 [2024-07-15 13:26:49.114822] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:52.517 [2024-07-15 13:26:49.114834] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:52.517 [2024-07-15 13:26:49.114845] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:52.517 [2024-07-15 13:26:49.114858] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:52.517 
[2024-07-15 13:26:49.114869] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:52.517 [2024-07-15 13:26:49.114880] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:52.517 [2024-07-15 13:26:49.114891] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:52.517 [2024-07-15 13:26:49.114905] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:52.517 [2024-07-15 13:26:49.114920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:52.517 [2024-07-15 13:26:49.114942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:52.517 [2024-07-15 13:26:49.114958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:52.517 [2024-07-15 13:26:49.114970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:52.517 [2024-07-15 13:26:49.114982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:52.517 [2024-07-15 13:26:49.114994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:52.517 [2024-07-15 13:26:49.115005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:52.517 [2024-07-15 13:26:49.115017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:52.517 [2024-07-15 13:26:49.115029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:52.517 [2024-07-15 13:26:49.115040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:52.517 [2024-07-15 13:26:49.115051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:52.517 [2024-07-15 13:26:49.115063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:52.517 [2024-07-15 13:26:49.115074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:52.517 [2024-07-15 13:26:49.115085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:52.517 [2024-07-15 13:26:49.115097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:52.517 [2024-07-15 13:26:49.115108] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:52.517 [2024-07-15 13:26:49.115130] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:52.517 [2024-07-15 13:26:49.115156] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:52.517 [2024-07-15 13:26:49.115175] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:52.517 [2024-07-15 13:26:49.115188] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:52.517 [2024-07-15 13:26:49.115199] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:52.517 [2024-07-15 13:26:49.115224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.517 [2024-07-15 13:26:49.115237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:52.517 [2024-07-15 13:26:49.115250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.029 ms 00:27:52.517 [2024-07-15 13:26:49.115261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.517 [2024-07-15 13:26:49.136624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.517 [2024-07-15 13:26:49.136691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:52.517 [2024-07-15 13:26:49.136715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.276 ms 00:27:52.517 [2024-07-15 13:26:49.136728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.517 [2024-07-15 13:26:49.136864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.517 [2024-07-15 13:26:49.136881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:52.517 [2024-07-15 13:26:49.136894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:27:52.517 [2024-07-15 13:26:49.136907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.517 [2024-07-15 13:26:49.149532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.517 [2024-07-15 13:26:49.149608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:52.517 [2024-07-15 13:26:49.149628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.527 ms 00:27:52.517 [2024-07-15 13:26:49.149642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.517 [2024-07-15 13:26:49.149721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.517 [2024-07-15 13:26:49.149745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:52.517 [2024-07-15 13:26:49.149759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:52.517 [2024-07-15 13:26:49.149770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.517 [2024-07-15 13:26:49.149942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.517 [2024-07-15 13:26:49.149961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:52.517 [2024-07-15 13:26:49.149975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:27:52.517 [2024-07-15 13:26:49.149986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.517 [2024-07-15 13:26:49.150180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.517 [2024-07-15 13:26:49.150206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:52.517 [2024-07-15 13:26:49.150220] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:27:52.517 [2024-07-15 13:26:49.150232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.517 [2024-07-15 13:26:49.157790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.517 [2024-07-15 13:26:49.157857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:52.517 [2024-07-15 13:26:49.157876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.525 ms 00:27:52.517 [2024-07-15 13:26:49.157889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.517 [2024-07-15 13:26:49.158125] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:52.517 [2024-07-15 13:26:49.158177] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:52.517 [2024-07-15 13:26:49.158195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.517 [2024-07-15 13:26:49.158208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:52.517 [2024-07-15 13:26:49.158228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:27:52.517 [2024-07-15 13:26:49.158239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.517 [2024-07-15 13:26:49.171757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.517 [2024-07-15 13:26:49.171825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:52.517 [2024-07-15 13:26:49.171845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.480 ms 00:27:52.517 [2024-07-15 13:26:49.171858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.517 [2024-07-15 13:26:49.172036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.517 [2024-07-15 13:26:49.172055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:52.517 [2024-07-15 13:26:49.172068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:27:52.517 [2024-07-15 13:26:49.172080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.517 [2024-07-15 13:26:49.172194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.517 [2024-07-15 13:26:49.172224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:52.517 [2024-07-15 13:26:49.172238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:27:52.517 [2024-07-15 13:26:49.172250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.517 [2024-07-15 13:26:49.172702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.517 [2024-07-15 13:26:49.172721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:52.517 [2024-07-15 13:26:49.172735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:27:52.517 [2024-07-15 13:26:49.172761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.517 [2024-07-15 13:26:49.172792] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:27:52.517 [2024-07-15 13:26:49.172808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.517 [2024-07-15 13:26:49.172827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:27:52.517 [2024-07-15 13:26:49.172840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:27:52.517 [2024-07-15 13:26:49.172851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.517 [2024-07-15 13:26:49.183396] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:52.517 [2024-07-15 13:26:49.183677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.517 [2024-07-15 13:26:49.183698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:52.517 [2024-07-15 13:26:49.183715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.797 ms 00:27:52.517 [2024-07-15 13:26:49.183733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.517 [2024-07-15 13:26:49.186343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.517 [2024-07-15 13:26:49.186377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:52.517 [2024-07-15 13:26:49.186412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.558 ms 00:27:52.517 [2024-07-15 13:26:49.186424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.517 [2024-07-15 13:26:49.186597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.517 [2024-07-15 13:26:49.186619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:52.517 [2024-07-15 13:26:49.186633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:27:52.517 [2024-07-15 13:26:49.186652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.517 [2024-07-15 13:26:49.186689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.517 [2024-07-15 13:26:49.186704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:52.517 [2024-07-15 13:26:49.186717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:52.517 [2024-07-15 13:26:49.186729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.517 [2024-07-15 13:26:49.186774] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:52.517 [2024-07-15 13:26:49.186791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.517 [2024-07-15 13:26:49.186804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:52.517 [2024-07-15 13:26:49.186816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:27:52.517 [2024-07-15 13:26:49.186827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.517 [2024-07-15 13:26:49.191485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.517 [2024-07-15 13:26:49.191546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:52.517 [2024-07-15 13:26:49.191579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.627 ms 00:27:52.517 [2024-07-15 13:26:49.191592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.517 [2024-07-15 13:26:49.191680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.517 [2024-07-15 13:26:49.191699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:52.517 [2024-07-15 13:26:49.191713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.042 ms 00:27:52.517 [2024-07-15 13:26:49.191724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.517 [2024-07-15 13:26:49.193120] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 83.776 ms, result 0 00:28:32.318  Copying: 26/1024 [MB] (26 MBps) Copying: 54/1024 [MB] (27 MBps) Copying: 81/1024 [MB] (27 MBps) Copying: 108/1024 [MB] (26 MBps) Copying: 134/1024 [MB] (26 MBps) Copying: 161/1024 [MB] (26 MBps) Copying: 187/1024 [MB] (26 MBps) Copying: 214/1024 [MB] (26 MBps) Copying: 240/1024 [MB] (25 MBps) Copying: 265/1024 [MB] (25 MBps) Copying: 291/1024 [MB] (26 MBps) Copying: 318/1024 [MB] (26 MBps) Copying: 343/1024 [MB] (25 MBps) Copying: 370/1024 [MB] (26 MBps) Copying: 396/1024 [MB] (26 MBps) Copying: 422/1024 [MB] (26 MBps) Copying: 448/1024 [MB] (25 MBps) Copying: 474/1024 [MB] (26 MBps) Copying: 501/1024 [MB] (26 MBps) Copying: 527/1024 [MB] (26 MBps) Copying: 553/1024 [MB] (26 MBps) Copying: 579/1024 [MB] (26 MBps) Copying: 605/1024 [MB] (25 MBps) Copying: 630/1024 [MB] (25 MBps) Copying: 656/1024 [MB] (26 MBps) Copying: 682/1024 [MB] (25 MBps) Copying: 708/1024 [MB] (25 MBps) Copying: 734/1024 [MB] (26 MBps) Copying: 761/1024 [MB] (26 MBps) Copying: 786/1024 [MB] (25 MBps) Copying: 810/1024 [MB] (24 MBps) Copying: 837/1024 [MB] (26 MBps) Copying: 861/1024 [MB] (24 MBps) Copying: 887/1024 [MB] (25 MBps) Copying: 913/1024 [MB] (26 MBps) Copying: 937/1024 [MB] (24 MBps) Copying: 964/1024 [MB] (26 MBps) Copying: 989/1024 [MB] (25 MBps) Copying: 1015/1024 [MB] (26 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-15 13:27:28.906187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.318 [2024-07-15 13:27:28.906323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:32.318 [2024-07-15 13:27:28.906368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:32.318 [2024-07-15 13:27:28.906398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.318 [2024-07-15 13:27:28.906475] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:32.318 [2024-07-15 13:27:28.907770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.318 [2024-07-15 13:27:28.907832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:32.318 [2024-07-15 13:27:28.907865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.248 ms 00:28:32.318 [2024-07-15 13:27:28.907897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.318 [2024-07-15 13:27:28.908344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.318 [2024-07-15 13:27:28.908411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:32.318 [2024-07-15 13:27:28.908451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:28:32.318 [2024-07-15 13:27:28.908488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.318 [2024-07-15 13:27:28.908597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.318 [2024-07-15 13:27:28.908645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:28:32.318 [2024-07-15 13:27:28.908685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:32.318 [2024-07-15 13:27:28.908720] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.318 [2024-07-15 13:27:28.908865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.318 [2024-07-15 13:27:28.908913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:28:32.318 [2024-07-15 13:27:28.908953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:28:32.318 [2024-07-15 13:27:28.908990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.318 [2024-07-15 13:27:28.909051] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:32.318 [2024-07-15 13:27:28.909100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.909182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.909229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.909268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.909305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.909343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.909382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.909433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.909470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.909507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.909546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.909582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.909622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.909662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.909701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.909737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.909772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.909810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.909849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.909885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.909920] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.909959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.909998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.910057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.910098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.910133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.910204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.910244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.910285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.910324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.910361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.910397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.910433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.910472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.910511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.910551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.910589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.910626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.910665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.910704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.910743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.910783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.910816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.910845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.910880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 
13:27:28.910921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.910969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.911008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.911045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.911080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.911118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.911180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.911221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.911260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.911290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.911320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.911354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.911389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.911428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.911467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.911508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.911545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.911574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.911602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.911637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.911674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.911714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.911752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:32.318 [2024-07-15 13:27:28.911793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.911830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 
00:28:32.319 [2024-07-15 13:27:28.911867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.911903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.911941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.911979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.912018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.912058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.912091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.912122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.912190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.912240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.912283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.912316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.912348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.912386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.912425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.912471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.912510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.912547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.912582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.912619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.912657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.912696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.912735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.912770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.912805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 
wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.912843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.912881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.912935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.912975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.913011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:32.319 [2024-07-15 13:27:28.913064] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:32.319 [2024-07-15 13:27:28.913135] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 58ef9247-ca11-49fb-b5ae-1b35b173dc4e 00:28:32.319 [2024-07-15 13:27:28.913224] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:32.319 [2024-07-15 13:27:28.913260] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:28:32.319 [2024-07-15 13:27:28.913295] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:32.319 [2024-07-15 13:27:28.913329] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:32.319 [2024-07-15 13:27:28.913363] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:32.319 [2024-07-15 13:27:28.913399] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:32.319 [2024-07-15 13:27:28.913434] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:32.319 [2024-07-15 13:27:28.913465] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:32.319 [2024-07-15 13:27:28.913498] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:32.319 [2024-07-15 13:27:28.913535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.319 [2024-07-15 13:27:28.913574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:32.319 [2024-07-15 13:27:28.913610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.485 ms 00:28:32.319 [2024-07-15 13:27:28.913661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.319 [2024-07-15 13:27:28.916486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.319 [2024-07-15 13:27:28.916549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:32.319 [2024-07-15 13:27:28.916591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.754 ms 00:28:32.319 [2024-07-15 13:27:28.916626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.319 [2024-07-15 13:27:28.916862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.319 [2024-07-15 13:27:28.916922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:32.319 [2024-07-15 13:27:28.916978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:28:32.319 [2024-07-15 13:27:28.917015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.319 [2024-07-15 13:27:28.924549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.319 [2024-07-15 13:27:28.924614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:28:32.319 [2024-07-15 13:27:28.924642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.319 [2024-07-15 13:27:28.924660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.319 [2024-07-15 13:27:28.924771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.319 [2024-07-15 13:27:28.924811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:32.319 [2024-07-15 13:27:28.924850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.319 [2024-07-15 13:27:28.924872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.319 [2024-07-15 13:27:28.924964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.319 [2024-07-15 13:27:28.924997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:32.319 [2024-07-15 13:27:28.925021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.319 [2024-07-15 13:27:28.925043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.319 [2024-07-15 13:27:28.925079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.319 [2024-07-15 13:27:28.925100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:32.319 [2024-07-15 13:27:28.925122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.319 [2024-07-15 13:27:28.925176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.319 [2024-07-15 13:27:28.941138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.319 [2024-07-15 13:27:28.941274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:32.319 [2024-07-15 13:27:28.941305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.319 [2024-07-15 13:27:28.941324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.319 [2024-07-15 13:27:28.952521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.319 [2024-07-15 13:27:28.952597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:32.319 [2024-07-15 13:27:28.952640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.319 [2024-07-15 13:27:28.952675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.319 [2024-07-15 13:27:28.952792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.319 [2024-07-15 13:27:28.952819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:32.319 [2024-07-15 13:27:28.952839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.319 [2024-07-15 13:27:28.952858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.319 [2024-07-15 13:27:28.952934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.319 [2024-07-15 13:27:28.952962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:32.319 [2024-07-15 13:27:28.952986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.319 [2024-07-15 13:27:28.953007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.319 [2024-07-15 13:27:28.953122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.319 
[2024-07-15 13:27:28.953207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:32.319 [2024-07-15 13:27:28.953240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.319 [2024-07-15 13:27:28.953263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.319 [2024-07-15 13:27:28.953350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.319 [2024-07-15 13:27:28.953387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:32.319 [2024-07-15 13:27:28.953421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.319 [2024-07-15 13:27:28.953456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.319 [2024-07-15 13:27:28.953523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.319 [2024-07-15 13:27:28.953544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:32.319 [2024-07-15 13:27:28.953573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.319 [2024-07-15 13:27:28.953607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.319 [2024-07-15 13:27:28.953722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.319 [2024-07-15 13:27:28.953755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:32.319 [2024-07-15 13:27:28.953780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.319 [2024-07-15 13:27:28.953802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.319 [2024-07-15 13:27:28.954044] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 47.829 ms, result 0 00:28:32.578 00:28:32.578 00:28:32.578 13:27:29 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:35.133 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:35.133 13:27:31 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:28:35.133 [2024-07-15 13:27:31.527334] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:28:35.133 [2024-07-15 13:27:31.528062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96525 ] 00:28:35.133 [2024-07-15 13:27:31.684258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.133 [2024-07-15 13:27:31.786062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.393 [2024-07-15 13:27:31.912208] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:35.393 [2024-07-15 13:27:31.912292] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:35.393 [2024-07-15 13:27:32.065642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.393 [2024-07-15 13:27:32.065720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:35.393 [2024-07-15 13:27:32.065744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:35.393 [2024-07-15 13:27:32.065756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.393 [2024-07-15 13:27:32.065830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.393 [2024-07-15 13:27:32.065849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:35.393 [2024-07-15 13:27:32.065862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:28:35.393 [2024-07-15 13:27:32.065880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.393 [2024-07-15 13:27:32.065912] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:35.393 [2024-07-15 13:27:32.066256] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:35.393 [2024-07-15 13:27:32.066284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.393 [2024-07-15 13:27:32.066308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:35.393 [2024-07-15 13:27:32.066321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.380 ms 00:28:35.393 [2024-07-15 13:27:32.066333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.393 [2024-07-15 13:27:32.066754] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:28:35.393 [2024-07-15 13:27:32.066780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.393 [2024-07-15 13:27:32.066806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:35.393 [2024-07-15 13:27:32.066819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:28:35.393 [2024-07-15 13:27:32.066841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.393 [2024-07-15 13:27:32.066905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.393 [2024-07-15 13:27:32.066922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:35.393 [2024-07-15 13:27:32.066935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:28:35.393 [2024-07-15 13:27:32.066945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.393 [2024-07-15 13:27:32.067456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.393 [2024-07-15 
13:27:32.067480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:35.393 [2024-07-15 13:27:32.067500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:28:35.393 [2024-07-15 13:27:32.067511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.393 [2024-07-15 13:27:32.067618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.393 [2024-07-15 13:27:32.067637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:35.393 [2024-07-15 13:27:32.067650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:28:35.393 [2024-07-15 13:27:32.067661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.393 [2024-07-15 13:27:32.067701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.393 [2024-07-15 13:27:32.067718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:35.393 [2024-07-15 13:27:32.067736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:35.393 [2024-07-15 13:27:32.067747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.393 [2024-07-15 13:27:32.067781] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:35.393 [2024-07-15 13:27:32.069958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.393 [2024-07-15 13:27:32.069993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:35.393 [2024-07-15 13:27:32.070009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.184 ms 00:28:35.393 [2024-07-15 13:27:32.070032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.393 [2024-07-15 13:27:32.070076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.393 [2024-07-15 13:27:32.070097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:35.393 [2024-07-15 13:27:32.070111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:35.393 [2024-07-15 13:27:32.070125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.393 [2024-07-15 13:27:32.070186] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:35.393 [2024-07-15 13:27:32.070219] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:35.393 [2024-07-15 13:27:32.070268] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:35.393 [2024-07-15 13:27:32.070292] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:28:35.393 [2024-07-15 13:27:32.070402] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:35.393 [2024-07-15 13:27:32.070418] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:35.393 [2024-07-15 13:27:32.070438] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:28:35.393 [2024-07-15 13:27:32.070453] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:35.393 [2024-07-15 13:27:32.070465] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:35.393 [2024-07-15 13:27:32.070477] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:35.393 [2024-07-15 13:27:32.070497] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:35.393 [2024-07-15 13:27:32.070515] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:35.393 [2024-07-15 13:27:32.070526] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:35.393 [2024-07-15 13:27:32.070541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.393 [2024-07-15 13:27:32.070553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:35.393 [2024-07-15 13:27:32.070565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:28:35.393 [2024-07-15 13:27:32.070575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.393 [2024-07-15 13:27:32.070676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.393 [2024-07-15 13:27:32.070692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:35.393 [2024-07-15 13:27:32.070703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:28:35.393 [2024-07-15 13:27:32.070724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.393 [2024-07-15 13:27:32.070838] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:35.393 [2024-07-15 13:27:32.070857] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:35.393 [2024-07-15 13:27:32.070870] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:35.393 [2024-07-15 13:27:32.070881] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.393 [2024-07-15 13:27:32.070892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:35.393 [2024-07-15 13:27:32.070902] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:35.393 [2024-07-15 13:27:32.070912] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:35.393 [2024-07-15 13:27:32.070923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:35.393 [2024-07-15 13:27:32.070940] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:35.393 [2024-07-15 13:27:32.070951] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:35.393 [2024-07-15 13:27:32.070961] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:35.393 [2024-07-15 13:27:32.070971] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:35.393 [2024-07-15 13:27:32.070981] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:35.393 [2024-07-15 13:27:32.070991] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:35.393 [2024-07-15 13:27:32.071001] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:35.393 [2024-07-15 13:27:32.071011] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.393 [2024-07-15 13:27:32.071024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:35.393 [2024-07-15 13:27:32.071035] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:35.393 [2024-07-15 13:27:32.071045] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:28:35.393 [2024-07-15 13:27:32.071056] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:35.393 [2024-07-15 13:27:32.071066] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:35.394 [2024-07-15 13:27:32.071076] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:35.394 [2024-07-15 13:27:32.071086] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:35.394 [2024-07-15 13:27:32.071096] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:35.394 [2024-07-15 13:27:32.071109] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:35.394 [2024-07-15 13:27:32.071120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:35.394 [2024-07-15 13:27:32.071130] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:35.394 [2024-07-15 13:27:32.071140] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:35.394 [2024-07-15 13:27:32.071173] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:35.394 [2024-07-15 13:27:32.071184] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:35.394 [2024-07-15 13:27:32.071194] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:35.394 [2024-07-15 13:27:32.071204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:35.394 [2024-07-15 13:27:32.071214] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:35.394 [2024-07-15 13:27:32.071225] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:35.394 [2024-07-15 13:27:32.071235] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:35.394 [2024-07-15 13:27:32.071245] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:35.394 [2024-07-15 13:27:32.071255] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:35.394 [2024-07-15 13:27:32.071265] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:35.394 [2024-07-15 13:27:32.071275] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:35.394 [2024-07-15 13:27:32.071285] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.394 [2024-07-15 13:27:32.071302] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:35.394 [2024-07-15 13:27:32.071313] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:35.394 [2024-07-15 13:27:32.071323] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.394 [2024-07-15 13:27:32.071333] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:35.394 [2024-07-15 13:27:32.071349] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:35.394 [2024-07-15 13:27:32.071360] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:35.394 [2024-07-15 13:27:32.071370] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.394 [2024-07-15 13:27:32.071381] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:35.394 [2024-07-15 13:27:32.071393] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:35.394 [2024-07-15 13:27:32.071403] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:35.394 [2024-07-15 13:27:32.071413] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:35.394 [2024-07-15 13:27:32.071424] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:35.394 [2024-07-15 13:27:32.071434] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:35.394 [2024-07-15 13:27:32.071446] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:35.394 [2024-07-15 13:27:32.071459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:35.394 [2024-07-15 13:27:32.071472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:35.394 [2024-07-15 13:27:32.071486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:35.394 [2024-07-15 13:27:32.071498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:35.394 [2024-07-15 13:27:32.071510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:35.394 [2024-07-15 13:27:32.071521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:35.394 [2024-07-15 13:27:32.071532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:35.394 [2024-07-15 13:27:32.071543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:35.394 [2024-07-15 13:27:32.071554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:35.394 [2024-07-15 13:27:32.071565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:35.394 [2024-07-15 13:27:32.071576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:35.394 [2024-07-15 13:27:32.071587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:35.394 [2024-07-15 13:27:32.071598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:35.394 [2024-07-15 13:27:32.071609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:35.394 [2024-07-15 13:27:32.071620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:35.394 [2024-07-15 13:27:32.071631] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:35.394 [2024-07-15 13:27:32.071654] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:35.394 [2024-07-15 13:27:32.071666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
00:28:35.394 [2024-07-15 13:27:32.071680] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:35.394 [2024-07-15 13:27:32.071692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:35.394 [2024-07-15 13:27:32.071704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:35.394 [2024-07-15 13:27:32.071726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.394 [2024-07-15 13:27:32.071737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:35.394 [2024-07-15 13:27:32.071749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.952 ms 00:28:35.394 [2024-07-15 13:27:32.071760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.394 [2024-07-15 13:27:32.092589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.394 [2024-07-15 13:27:32.092660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:35.394 [2024-07-15 13:27:32.092683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.749 ms 00:28:35.394 [2024-07-15 13:27:32.092696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.394 [2024-07-15 13:27:32.092833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.394 [2024-07-15 13:27:32.092849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:35.394 [2024-07-15 13:27:32.092877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:28:35.394 [2024-07-15 13:27:32.092888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.394 [2024-07-15 13:27:32.107248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.394 [2024-07-15 13:27:32.107316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:35.394 [2024-07-15 13:27:32.107338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.259 ms 00:28:35.394 [2024-07-15 13:27:32.107351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.394 [2024-07-15 13:27:32.107427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.394 [2024-07-15 13:27:32.107451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:35.394 [2024-07-15 13:27:32.107465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:35.394 [2024-07-15 13:27:32.107476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.394 [2024-07-15 13:27:32.107642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.394 [2024-07-15 13:27:32.107662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:35.394 [2024-07-15 13:27:32.107675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:28:35.394 [2024-07-15 13:27:32.107687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.394 [2024-07-15 13:27:32.107850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.394 [2024-07-15 13:27:32.107869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:35.394 [2024-07-15 13:27:32.107881] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:28:35.394 [2024-07-15 13:27:32.107892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.394 [2024-07-15 13:27:32.115434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.394 [2024-07-15 13:27:32.115492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:35.394 [2024-07-15 13:27:32.115510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.511 ms 00:28:35.394 [2024-07-15 13:27:32.115522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.394 [2024-07-15 13:27:32.115704] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:35.394 [2024-07-15 13:27:32.115728] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:35.394 [2024-07-15 13:27:32.115743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.394 [2024-07-15 13:27:32.115755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:35.394 [2024-07-15 13:27:32.115772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:28:35.394 [2024-07-15 13:27:32.115783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.394 [2024-07-15 13:27:32.129126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.394 [2024-07-15 13:27:32.129191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:35.394 [2024-07-15 13:27:32.129210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.317 ms 00:28:35.394 [2024-07-15 13:27:32.129222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.394 [2024-07-15 13:27:32.129389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.394 [2024-07-15 13:27:32.129406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:35.394 [2024-07-15 13:27:32.129419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:28:35.394 [2024-07-15 13:27:32.129442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.394 [2024-07-15 13:27:32.129529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.394 [2024-07-15 13:27:32.129548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:35.394 [2024-07-15 13:27:32.129561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:28:35.394 [2024-07-15 13:27:32.129572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.394 [2024-07-15 13:27:32.130032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.394 [2024-07-15 13:27:32.130051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:35.394 [2024-07-15 13:27:32.130065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:28:35.395 [2024-07-15 13:27:32.130087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.653 [2024-07-15 13:27:32.130110] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:28:35.653 [2024-07-15 13:27:32.130164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.653 [2024-07-15 13:27:32.130181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L 
checkpoints 00:28:35.653 [2024-07-15 13:27:32.130194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:28:35.653 [2024-07-15 13:27:32.130205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.653 [2024-07-15 13:27:32.139844] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:35.653 [2024-07-15 13:27:32.140077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.653 [2024-07-15 13:27:32.140097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:35.653 [2024-07-15 13:27:32.140116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.843 ms 00:28:35.653 [2024-07-15 13:27:32.140128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.653 [2024-07-15 13:27:32.142684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.653 [2024-07-15 13:27:32.142720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:35.653 [2024-07-15 13:27:32.142736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.496 ms 00:28:35.653 [2024-07-15 13:27:32.142747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.653 [2024-07-15 13:27:32.142884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.653 [2024-07-15 13:27:32.142903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:35.653 [2024-07-15 13:27:32.142920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:28:35.653 [2024-07-15 13:27:32.142932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.653 [2024-07-15 13:27:32.142965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.653 [2024-07-15 13:27:32.142990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:35.653 [2024-07-15 13:27:32.143003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:35.653 [2024-07-15 13:27:32.143014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.653 [2024-07-15 13:27:32.143057] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:35.653 [2024-07-15 13:27:32.143075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.653 [2024-07-15 13:27:32.143086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:35.653 [2024-07-15 13:27:32.143098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:35.653 [2024-07-15 13:27:32.143113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.653 [2024-07-15 13:27:32.147649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.653 [2024-07-15 13:27:32.147703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:35.653 [2024-07-15 13:27:32.147721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.511 ms 00:28:35.653 [2024-07-15 13:27:32.147733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.653 [2024-07-15 13:27:32.147818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.653 [2024-07-15 13:27:32.147838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:35.653 [2024-07-15 13:27:32.147851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 
00:28:35.653 [2024-07-15 13:27:32.147863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.653 [2024-07-15 13:27:32.149174] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 83.035 ms, result 0 00:29:16.699  Copying: 25/1024 [MB] (25 MBps) Copying: 51/1024 [MB] (25 MBps) Copying: 77/1024 [MB] (26 MBps) Copying: 103/1024 [MB] (26 MBps) Copying: 129/1024 [MB] (25 MBps) Copying: 155/1024 [MB] (26 MBps) Copying: 181/1024 [MB] (26 MBps) Copying: 207/1024 [MB] (25 MBps) Copying: 232/1024 [MB] (24 MBps) Copying: 256/1024 [MB] (24 MBps) Copying: 282/1024 [MB] (25 MBps) Copying: 309/1024 [MB] (27 MBps) Copying: 333/1024 [MB] (23 MBps) Copying: 359/1024 [MB] (26 MBps) Copying: 385/1024 [MB] (26 MBps) Copying: 411/1024 [MB] (26 MBps) Copying: 438/1024 [MB] (26 MBps) Copying: 464/1024 [MB] (26 MBps) Copying: 490/1024 [MB] (25 MBps) Copying: 515/1024 [MB] (25 MBps) Copying: 541/1024 [MB] (26 MBps) Copying: 568/1024 [MB] (26 MBps) Copying: 594/1024 [MB] (26 MBps) Copying: 621/1024 [MB] (26 MBps) Copying: 646/1024 [MB] (25 MBps) Copying: 671/1024 [MB] (24 MBps) Copying: 697/1024 [MB] (26 MBps) Copying: 723/1024 [MB] (25 MBps) Copying: 750/1024 [MB] (26 MBps) Copying: 776/1024 [MB] (26 MBps) Copying: 801/1024 [MB] (25 MBps) Copying: 827/1024 [MB] (25 MBps) Copying: 853/1024 [MB] (25 MBps) Copying: 877/1024 [MB] (24 MBps) Copying: 902/1024 [MB] (24 MBps) Copying: 926/1024 [MB] (24 MBps) Copying: 950/1024 [MB] (23 MBps) Copying: 974/1024 [MB] (23 MBps) Copying: 999/1024 [MB] (24 MBps) Copying: 1023/1024 [MB] (23 MBps) Copying: 1048412/1048576 [kB] (832 kBps) Copying: 1024/1024 [MB] (average 24 MBps)[2024-07-15 13:28:13.373669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.699 [2024-07-15 13:28:13.373770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:16.699 [2024-07-15 13:28:13.373794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:16.699 [2024-07-15 13:28:13.373808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.699 [2024-07-15 13:28:13.377029] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:16.699 [2024-07-15 13:28:13.379454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.699 [2024-07-15 13:28:13.379515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:16.699 [2024-07-15 13:28:13.379536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.337 ms 00:29:16.699 [2024-07-15 13:28:13.379557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.699 [2024-07-15 13:28:13.390155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.699 [2024-07-15 13:28:13.390224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:16.699 [2024-07-15 13:28:13.390256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.089 ms 00:29:16.699 [2024-07-15 13:28:13.390269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.699 [2024-07-15 13:28:13.390310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.699 [2024-07-15 13:28:13.390325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:16.699 [2024-07-15 13:28:13.390338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:16.699 
[2024-07-15 13:28:13.390350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.699 [2024-07-15 13:28:13.390424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.699 [2024-07-15 13:28:13.390441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:16.699 [2024-07-15 13:28:13.390454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:29:16.699 [2024-07-15 13:28:13.390465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.699 [2024-07-15 13:28:13.390486] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:16.699 [2024-07-15 13:28:13.390503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130304 / 261120 wr_cnt: 1 state: open 00:29:16.699 [2024-07-15 13:28:13.390543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 13:28:13.390561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 13:28:13.390573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 13:28:13.390589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 13:28:13.390611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 13:28:13.390632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 13:28:13.390654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 13:28:13.390668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 13:28:13.390680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 13:28:13.390692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 13:28:13.390704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 13:28:13.390716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 13:28:13.390734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 13:28:13.390754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 13:28:13.390773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 13:28:13.390796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 13:28:13.390818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 13:28:13.390839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 13:28:13.390859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 
13:28:13.390882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 13:28:13.390899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 13:28:13.390910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 13:28:13.390922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 13:28:13.390933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 13:28:13.390947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 13:28:13.390967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 13:28:13.390989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 13:28:13.391013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 13:28:13.391038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:16.699 [2024-07-15 13:28:13.391060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 
00:29:16.700 [2024-07-15 13:28:13.391381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 
wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.391995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.392020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.392043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.392064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.392085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.392107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.392129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.392171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.392199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.392224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.392248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.392270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.392292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.392314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.392328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:16.700 [2024-07-15 13:28:13.392340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:16.701 [2024-07-15 13:28:13.392352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:16.701 [2024-07-15 13:28:13.392366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:16.701 [2024-07-15 13:28:13.392388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:16.701 [2024-07-15 13:28:13.392412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:16.701 [2024-07-15 13:28:13.392438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:16.701 [2024-07-15 13:28:13.392461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:16.701 [2024-07-15 13:28:13.392486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:16.701 [2024-07-15 13:28:13.392504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:16.701 [2024-07-15 13:28:13.392516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:16.701 [2024-07-15 13:28:13.392529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:16.701 [2024-07-15 13:28:13.392550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:16.701 [2024-07-15 13:28:13.392583] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:16.701 [2024-07-15 13:28:13.392658] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 58ef9247-ca11-49fb-b5ae-1b35b173dc4e 00:29:16.701 [2024-07-15 13:28:13.392683] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130304 00:29:16.701 [2024-07-15 13:28:13.392703] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130336 00:29:16.701 [2024-07-15 13:28:13.392738] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130304 00:29:16.701 [2024-07-15 13:28:13.392766] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:29:16.701 [2024-07-15 13:28:13.392786] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:16.701 [2024-07-15 13:28:13.392808] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:16.701 [2024-07-15 13:28:13.392829] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:16.701 [2024-07-15 13:28:13.392852] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:16.701 [2024-07-15 13:28:13.392871] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:16.701 [2024-07-15 13:28:13.392893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.701 [2024-07-15 13:28:13.392909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:16.701 [2024-07-15 13:28:13.392943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.408 ms 00:29:16.701 [2024-07-15 13:28:13.392963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.701 [2024-07-15 13:28:13.395292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.701 [2024-07-15 13:28:13.395333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:16.701 [2024-07-15 13:28:13.395349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.285 ms 00:29:16.701 [2024-07-15 13:28:13.395361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.701 [2024-07-15 13:28:13.395528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.701 [2024-07-15 13:28:13.395545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:16.701 [2024-07-15 13:28:13.395558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:29:16.701 [2024-07-15 13:28:13.395569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.701 [2024-07-15 13:28:13.403108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.701 [2024-07-15 13:28:13.403189] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:16.701 [2024-07-15 13:28:13.403209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.701 [2024-07-15 13:28:13.403221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.701 [2024-07-15 13:28:13.403314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.701 [2024-07-15 13:28:13.403330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:16.701 [2024-07-15 13:28:13.403343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.701 [2024-07-15 13:28:13.403354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.701 [2024-07-15 13:28:13.403436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.701 [2024-07-15 13:28:13.403483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:16.701 [2024-07-15 13:28:13.403502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.701 [2024-07-15 13:28:13.403537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.701 [2024-07-15 13:28:13.403586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.701 [2024-07-15 13:28:13.403615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:16.701 [2024-07-15 13:28:13.403636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.701 [2024-07-15 13:28:13.403671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.701 [2024-07-15 13:28:13.420125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.701 [2024-07-15 13:28:13.420244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:16.701 [2024-07-15 13:28:13.420265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.701 [2024-07-15 13:28:13.420291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.701 [2024-07-15 13:28:13.432104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.701 [2024-07-15 13:28:13.432214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:16.701 [2024-07-15 13:28:13.432235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.701 [2024-07-15 13:28:13.432248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.701 [2024-07-15 13:28:13.432333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.701 [2024-07-15 13:28:13.432350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:16.701 [2024-07-15 13:28:13.432373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.701 [2024-07-15 13:28:13.432384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.701 [2024-07-15 13:28:13.432434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.701 [2024-07-15 13:28:13.432449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:16.701 [2024-07-15 13:28:13.432467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.701 [2024-07-15 13:28:13.432486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.701 [2024-07-15 13:28:13.432776] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:29:16.701 [2024-07-15 13:28:13.432821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:16.701 [2024-07-15 13:28:13.432859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.701 [2024-07-15 13:28:13.432883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.701 [2024-07-15 13:28:13.432954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.701 [2024-07-15 13:28:13.432984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:16.702 [2024-07-15 13:28:13.433008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.702 [2024-07-15 13:28:13.433027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.702 [2024-07-15 13:28:13.433092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.702 [2024-07-15 13:28:13.433121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:16.702 [2024-07-15 13:28:13.433159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.702 [2024-07-15 13:28:13.433194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.702 [2024-07-15 13:28:13.433276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.702 [2024-07-15 13:28:13.433308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:16.702 [2024-07-15 13:28:13.433333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.702 [2024-07-15 13:28:13.433351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.702 [2024-07-15 13:28:13.433626] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 62.482 ms, result 0 00:29:17.643 00:29:17.643 00:29:17.643 13:28:14 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:29:17.643 [2024-07-15 13:28:14.264289] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:29:17.643 [2024-07-15 13:28:14.264460] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96942 ] 00:29:17.901 [2024-07-15 13:28:14.410023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.901 [2024-07-15 13:28:14.508833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.901 [2024-07-15 13:28:14.638810] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:17.901 [2024-07-15 13:28:14.638926] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:18.161 [2024-07-15 13:28:14.793152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.161 [2024-07-15 13:28:14.793242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:18.161 [2024-07-15 13:28:14.793281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:18.161 [2024-07-15 13:28:14.793298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.161 [2024-07-15 13:28:14.793377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.161 [2024-07-15 13:28:14.793399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:18.161 [2024-07-15 13:28:14.793413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:29:18.161 [2024-07-15 13:28:14.793430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.161 [2024-07-15 13:28:14.793464] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:18.161 [2024-07-15 13:28:14.793826] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:18.161 [2024-07-15 13:28:14.793885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.161 [2024-07-15 13:28:14.793918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:18.161 [2024-07-15 13:28:14.793947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:29:18.161 [2024-07-15 13:28:14.793959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.161 [2024-07-15 13:28:14.794675] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:18.161 [2024-07-15 13:28:14.794713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.161 [2024-07-15 13:28:14.794742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:18.161 [2024-07-15 13:28:14.794756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:29:18.161 [2024-07-15 13:28:14.794772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.161 [2024-07-15 13:28:14.794837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.161 [2024-07-15 13:28:14.794855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:18.161 [2024-07-15 13:28:14.794868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:29:18.162 [2024-07-15 13:28:14.794879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.162 [2024-07-15 13:28:14.795469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.162 [2024-07-15 
13:28:14.795509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:18.162 [2024-07-15 13:28:14.795526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:29:18.162 [2024-07-15 13:28:14.795543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.162 [2024-07-15 13:28:14.795664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.162 [2024-07-15 13:28:14.795688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:18.162 [2024-07-15 13:28:14.795701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:29:18.162 [2024-07-15 13:28:14.795713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.162 [2024-07-15 13:28:14.795759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.162 [2024-07-15 13:28:14.795786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:18.162 [2024-07-15 13:28:14.795803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:29:18.162 [2024-07-15 13:28:14.795821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.162 [2024-07-15 13:28:14.795878] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:18.162 [2024-07-15 13:28:14.798238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.162 [2024-07-15 13:28:14.798278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:18.162 [2024-07-15 13:28:14.798294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.370 ms 00:29:18.162 [2024-07-15 13:28:14.798306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.162 [2024-07-15 13:28:14.798358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.162 [2024-07-15 13:28:14.798377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:18.162 [2024-07-15 13:28:14.798389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:18.162 [2024-07-15 13:28:14.798407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.162 [2024-07-15 13:28:14.798457] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:18.162 [2024-07-15 13:28:14.798494] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:18.162 [2024-07-15 13:28:14.798570] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:18.162 [2024-07-15 13:28:14.798601] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:18.162 [2024-07-15 13:28:14.798738] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:18.162 [2024-07-15 13:28:14.798784] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:18.162 [2024-07-15 13:28:14.798811] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:18.162 [2024-07-15 13:28:14.798849] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:18.162 [2024-07-15 13:28:14.798877] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:18.162 [2024-07-15 13:28:14.798916] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:18.162 [2024-07-15 13:28:14.798934] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:18.162 [2024-07-15 13:28:14.798947] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:18.162 [2024-07-15 13:28:14.798967] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:18.162 [2024-07-15 13:28:14.798989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.162 [2024-07-15 13:28:14.799012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:18.162 [2024-07-15 13:28:14.799039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:29:18.162 [2024-07-15 13:28:14.799060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.162 [2024-07-15 13:28:14.799214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.162 [2024-07-15 13:28:14.799270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:18.162 [2024-07-15 13:28:14.799296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:29:18.162 [2024-07-15 13:28:14.799318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.162 [2024-07-15 13:28:14.799475] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:18.162 [2024-07-15 13:28:14.799510] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:18.162 [2024-07-15 13:28:14.799531] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:18.162 [2024-07-15 13:28:14.799548] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:18.162 [2024-07-15 13:28:14.799588] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:18.162 [2024-07-15 13:28:14.799609] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:18.162 [2024-07-15 13:28:14.799629] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:18.162 [2024-07-15 13:28:14.799648] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:18.162 [2024-07-15 13:28:14.799664] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:18.162 [2024-07-15 13:28:14.799686] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:18.162 [2024-07-15 13:28:14.799707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:18.162 [2024-07-15 13:28:14.799728] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:18.162 [2024-07-15 13:28:14.799749] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:18.162 [2024-07-15 13:28:14.799771] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:18.162 [2024-07-15 13:28:14.799791] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:18.162 [2024-07-15 13:28:14.799811] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:18.162 [2024-07-15 13:28:14.799830] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:18.162 [2024-07-15 13:28:14.799843] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:18.162 [2024-07-15 13:28:14.799857] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:29:18.162 [2024-07-15 13:28:14.799875] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:18.162 [2024-07-15 13:28:14.799894] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:18.162 [2024-07-15 13:28:14.799916] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:18.162 [2024-07-15 13:28:14.799936] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:18.162 [2024-07-15 13:28:14.799957] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:18.162 [2024-07-15 13:28:14.799979] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:18.162 [2024-07-15 13:28:14.800004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:18.162 [2024-07-15 13:28:14.800026] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:18.162 [2024-07-15 13:28:14.800037] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:18.162 [2024-07-15 13:28:14.800048] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:18.162 [2024-07-15 13:28:14.800059] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:18.162 [2024-07-15 13:28:14.800078] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:18.162 [2024-07-15 13:28:14.800097] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:18.162 [2024-07-15 13:28:14.800119] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:18.162 [2024-07-15 13:28:14.800141] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:18.162 [2024-07-15 13:28:14.800185] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:18.162 [2024-07-15 13:28:14.800207] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:18.162 [2024-07-15 13:28:14.800227] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:18.162 [2024-07-15 13:28:14.800246] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:18.162 [2024-07-15 13:28:14.800266] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:18.162 [2024-07-15 13:28:14.800284] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:18.162 [2024-07-15 13:28:14.800304] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:18.162 [2024-07-15 13:28:14.800332] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:18.162 [2024-07-15 13:28:14.800356] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:18.162 [2024-07-15 13:28:14.800375] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:18.162 [2024-07-15 13:28:14.800388] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:18.162 [2024-07-15 13:28:14.800414] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:18.162 [2024-07-15 13:28:14.800430] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:18.162 [2024-07-15 13:28:14.800450] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:18.162 [2024-07-15 13:28:14.800471] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:18.162 [2024-07-15 13:28:14.800492] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:18.162 [2024-07-15 13:28:14.800512] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:18.162 [2024-07-15 13:28:14.800542] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:18.162 [2024-07-15 13:28:14.800564] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:18.162 [2024-07-15 13:28:14.800585] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:18.162 [2024-07-15 13:28:14.800602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:18.162 [2024-07-15 13:28:14.800625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:18.162 [2024-07-15 13:28:14.800647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:18.162 [2024-07-15 13:28:14.800679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:18.162 [2024-07-15 13:28:14.800704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:18.162 [2024-07-15 13:28:14.800726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:18.162 [2024-07-15 13:28:14.800757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:18.162 [2024-07-15 13:28:14.800778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:18.162 [2024-07-15 13:28:14.800800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:18.162 [2024-07-15 13:28:14.800820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:18.163 [2024-07-15 13:28:14.800840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:18.163 [2024-07-15 13:28:14.800862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:18.163 [2024-07-15 13:28:14.800876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:18.163 [2024-07-15 13:28:14.800889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:18.163 [2024-07-15 13:28:14.800910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:18.163 [2024-07-15 13:28:14.800932] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:18.163 [2024-07-15 13:28:14.800975] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:18.163 [2024-07-15 13:28:14.801000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
00:29:18.163 [2024-07-15 13:28:14.801024] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:18.163 [2024-07-15 13:28:14.801054] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:18.163 [2024-07-15 13:28:14.801078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:18.163 [2024-07-15 13:28:14.801120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.163 [2024-07-15 13:28:14.801158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:18.163 [2024-07-15 13:28:14.801179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.728 ms 00:29:18.163 [2024-07-15 13:28:14.801192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.163 [2024-07-15 13:28:14.823716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.163 [2024-07-15 13:28:14.823788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:18.163 [2024-07-15 13:28:14.823842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.405 ms 00:29:18.163 [2024-07-15 13:28:14.823859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.163 [2024-07-15 13:28:14.824034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.163 [2024-07-15 13:28:14.824057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:18.163 [2024-07-15 13:28:14.824074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:29:18.163 [2024-07-15 13:28:14.824090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.163 [2024-07-15 13:28:14.837610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.163 [2024-07-15 13:28:14.837667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:18.163 [2024-07-15 13:28:14.837703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.342 ms 00:29:18.163 [2024-07-15 13:28:14.837716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.163 [2024-07-15 13:28:14.837791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.163 [2024-07-15 13:28:14.837812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:18.163 [2024-07-15 13:28:14.837825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:18.163 [2024-07-15 13:28:14.837844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.163 [2024-07-15 13:28:14.838020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.163 [2024-07-15 13:28:14.838064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:18.163 [2024-07-15 13:28:14.838078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:29:18.163 [2024-07-15 13:28:14.838090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.163 [2024-07-15 13:28:14.838281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.163 [2024-07-15 13:28:14.838310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:18.163 [2024-07-15 13:28:14.838332] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:29:18.163 [2024-07-15 13:28:14.838351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.163 [2024-07-15 13:28:14.846459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.163 [2024-07-15 13:28:14.846510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:18.163 [2024-07-15 13:28:14.846545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.064 ms 00:29:18.163 [2024-07-15 13:28:14.846557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.163 [2024-07-15 13:28:14.846739] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:29:18.163 [2024-07-15 13:28:14.846768] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:18.163 [2024-07-15 13:28:14.846784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.163 [2024-07-15 13:28:14.846796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:18.163 [2024-07-15 13:28:14.846822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:29:18.163 [2024-07-15 13:28:14.846833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.163 [2024-07-15 13:28:14.860273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.163 [2024-07-15 13:28:14.860320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:18.163 [2024-07-15 13:28:14.860353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.415 ms 00:29:18.163 [2024-07-15 13:28:14.860365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.163 [2024-07-15 13:28:14.860505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.163 [2024-07-15 13:28:14.860523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:18.163 [2024-07-15 13:28:14.860535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:29:18.163 [2024-07-15 13:28:14.860574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.163 [2024-07-15 13:28:14.860649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.163 [2024-07-15 13:28:14.860672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:18.163 [2024-07-15 13:28:14.860686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:18.163 [2024-07-15 13:28:14.860701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.163 [2024-07-15 13:28:14.861054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.163 [2024-07-15 13:28:14.861071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:18.163 [2024-07-15 13:28:14.861084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:29:18.163 [2024-07-15 13:28:14.861105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.163 [2024-07-15 13:28:14.861132] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:18.163 [2024-07-15 13:28:14.861160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.163 [2024-07-15 13:28:14.861219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L 
checkpoints 00:29:18.163 [2024-07-15 13:28:14.861251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:29:18.163 [2024-07-15 13:28:14.861263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.163 [2024-07-15 13:28:14.870998] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:18.163 [2024-07-15 13:28:14.871252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.163 [2024-07-15 13:28:14.871273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:18.163 [2024-07-15 13:28:14.871292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.960 ms 00:29:18.163 [2024-07-15 13:28:14.871320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.163 [2024-07-15 13:28:14.874487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.163 [2024-07-15 13:28:14.874541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:18.163 [2024-07-15 13:28:14.874573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.113 ms 00:29:18.163 [2024-07-15 13:28:14.874595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.163 [2024-07-15 13:28:14.874691] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:29:18.163 [2024-07-15 13:28:14.875396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.163 [2024-07-15 13:28:14.875443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:18.163 [2024-07-15 13:28:14.875469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:29:18.163 [2024-07-15 13:28:14.875503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.163 [2024-07-15 13:28:14.875565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.163 [2024-07-15 13:28:14.875599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:18.163 [2024-07-15 13:28:14.875625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:18.163 [2024-07-15 13:28:14.875648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.163 [2024-07-15 13:28:14.875766] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:18.163 [2024-07-15 13:28:14.875789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.163 [2024-07-15 13:28:14.875800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:18.163 [2024-07-15 13:28:14.875812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:29:18.163 [2024-07-15 13:28:14.875829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.163 [2024-07-15 13:28:14.880664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.163 [2024-07-15 13:28:14.880723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:18.163 [2024-07-15 13:28:14.880757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.803 ms 00:29:18.163 [2024-07-15 13:28:14.880775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.163 [2024-07-15 13:28:14.880859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.163 [2024-07-15 13:28:14.880879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Finalize initialization 00:29:18.163 [2024-07-15 13:28:14.880892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:18.163 [2024-07-15 13:28:14.880903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.163 [2024-07-15 13:28:14.891564] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 95.240 ms, result 0 00:30:02.672  Copying: 25/1024 [MB] (25 MBps) Copying: 49/1024 [MB] (23 MBps) Copying: 74/1024 [MB] (25 MBps) Copying: 99/1024 [MB] (24 MBps) Copying: 124/1024 [MB] (25 MBps) Copying: 150/1024 [MB] (25 MBps) Copying: 175/1024 [MB] (25 MBps) Copying: 200/1024 [MB] (25 MBps) Copying: 226/1024 [MB] (25 MBps) Copying: 252/1024 [MB] (26 MBps) Copying: 278/1024 [MB] (26 MBps) Copying: 304/1024 [MB] (25 MBps) Copying: 330/1024 [MB] (25 MBps) Copying: 355/1024 [MB] (25 MBps) Copying: 380/1024 [MB] (25 MBps) Copying: 405/1024 [MB] (25 MBps) Copying: 430/1024 [MB] (24 MBps) Copying: 454/1024 [MB] (24 MBps) Copying: 476/1024 [MB] (21 MBps) Copying: 498/1024 [MB] (22 MBps) Copying: 520/1024 [MB] (21 MBps) Copying: 540/1024 [MB] (20 MBps) Copying: 562/1024 [MB] (21 MBps) Copying: 583/1024 [MB] (21 MBps) Copying: 604/1024 [MB] (21 MBps) Copying: 627/1024 [MB] (22 MBps) Copying: 647/1024 [MB] (20 MBps) Copying: 668/1024 [MB] (20 MBps) Copying: 690/1024 [MB] (21 MBps) Copying: 711/1024 [MB] (21 MBps) Copying: 733/1024 [MB] (22 MBps) Copying: 755/1024 [MB] (21 MBps) Copying: 777/1024 [MB] (21 MBps) Copying: 800/1024 [MB] (22 MBps) Copying: 822/1024 [MB] (22 MBps) Copying: 844/1024 [MB] (21 MBps) Copying: 867/1024 [MB] (22 MBps) Copying: 887/1024 [MB] (20 MBps) Copying: 908/1024 [MB] (20 MBps) Copying: 929/1024 [MB] (21 MBps) Copying: 954/1024 [MB] (24 MBps) Copying: 980/1024 [MB] (25 MBps) Copying: 1005/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-15 13:28:59.155259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.672 [2024-07-15 13:28:59.155350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:02.672 [2024-07-15 13:28:59.155373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:02.672 [2024-07-15 13:28:59.155401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.672 [2024-07-15 13:28:59.155443] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:02.672 [2024-07-15 13:28:59.156312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.672 [2024-07-15 13:28:59.156341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:02.672 [2024-07-15 13:28:59.156361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.844 ms 00:30:02.672 [2024-07-15 13:28:59.156389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.672 [2024-07-15 13:28:59.156740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.672 [2024-07-15 13:28:59.156772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:02.672 [2024-07-15 13:28:59.156796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:30:02.672 [2024-07-15 13:28:59.156817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.672 [2024-07-15 13:28:59.156875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.672 [2024-07-15 13:28:59.156906] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:02.672 [2024-07-15 13:28:59.156932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:02.672 [2024-07-15 13:28:59.156957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.672 [2024-07-15 13:28:59.157061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.672 [2024-07-15 13:28:59.157104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:02.672 [2024-07-15 13:28:59.157127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:30:02.672 [2024-07-15 13:28:59.157169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.672 [2024-07-15 13:28:59.157232] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:02.672 [2024-07-15 13:28:59.157265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:30:02.672 [2024-07-15 13:28:59.157292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:02.672 [2024-07-15 13:28:59.157316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.157992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158237] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 
13:28:59.158774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.158997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.159019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.159043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.159067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.159089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.159112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.159133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.159171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.159195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.159228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.159250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.159909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.159936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.159953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 
00:30:02.673 [2024-07-15 13:28:59.159976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.160000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.160023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.160045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:02.673 [2024-07-15 13:28:59.160066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:02.674 [2024-07-15 13:28:59.160086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:02.674 [2024-07-15 13:28:59.160108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:02.674 [2024-07-15 13:28:59.160140] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:02.674 [2024-07-15 13:28:59.160202] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 58ef9247-ca11-49fb-b5ae-1b35b173dc4e 00:30:02.674 [2024-07-15 13:28:59.160226] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:30:02.674 [2024-07-15 13:28:59.160243] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3616 00:30:02.674 [2024-07-15 13:28:59.160262] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3584 00:30:02.674 [2024-07-15 13:28:59.160308] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:30:02.674 [2024-07-15 13:28:59.160329] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:02.674 [2024-07-15 13:28:59.160352] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:02.674 [2024-07-15 13:28:59.160371] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:02.674 [2024-07-15 13:28:59.160391] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:02.674 [2024-07-15 13:28:59.160409] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:02.674 [2024-07-15 13:28:59.160430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.674 [2024-07-15 13:28:59.160452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:02.674 [2024-07-15 13:28:59.160475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.200 ms 00:30:02.674 [2024-07-15 13:28:59.160490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.674 [2024-07-15 13:28:59.162857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.674 [2024-07-15 13:28:59.162898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:02.674 [2024-07-15 13:28:59.162915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.312 ms 00:30:02.674 [2024-07-15 13:28:59.162927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.674 [2024-07-15 13:28:59.163476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.674 [2024-07-15 13:28:59.163496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:02.674 [2024-07-15 13:28:59.163510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:30:02.674 [2024-07-15 
13:28:59.163533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.674 [2024-07-15 13:28:59.171012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:02.674 [2024-07-15 13:28:59.171063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:02.674 [2024-07-15 13:28:59.171098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:02.674 [2024-07-15 13:28:59.171110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.674 [2024-07-15 13:28:59.171237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:02.674 [2024-07-15 13:28:59.171255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:02.674 [2024-07-15 13:28:59.171268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:02.674 [2024-07-15 13:28:59.171280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.674 [2024-07-15 13:28:59.171351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:02.674 [2024-07-15 13:28:59.171378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:02.674 [2024-07-15 13:28:59.171391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:02.674 [2024-07-15 13:28:59.171402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.674 [2024-07-15 13:28:59.171425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:02.674 [2024-07-15 13:28:59.171439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:02.674 [2024-07-15 13:28:59.171451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:02.674 [2024-07-15 13:28:59.171462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.674 [2024-07-15 13:28:59.186081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:02.674 [2024-07-15 13:28:59.186172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:02.674 [2024-07-15 13:28:59.186194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:02.674 [2024-07-15 13:28:59.186207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.674 [2024-07-15 13:28:59.197285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:02.674 [2024-07-15 13:28:59.197355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:02.674 [2024-07-15 13:28:59.197392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:02.674 [2024-07-15 13:28:59.197403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.674 [2024-07-15 13:28:59.197481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:02.674 [2024-07-15 13:28:59.197499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:02.674 [2024-07-15 13:28:59.197522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:02.674 [2024-07-15 13:28:59.197534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.674 [2024-07-15 13:28:59.197589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:02.674 [2024-07-15 13:28:59.197612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:02.674 [2024-07-15 13:28:59.197624] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:02.674 [2024-07-15 13:28:59.197635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.674 [2024-07-15 13:28:59.197737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:02.674 [2024-07-15 13:28:59.197755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:02.674 [2024-07-15 13:28:59.197768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:02.674 [2024-07-15 13:28:59.197784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.674 [2024-07-15 13:28:59.197834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:02.674 [2024-07-15 13:28:59.197852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:02.674 [2024-07-15 13:28:59.197874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:02.674 [2024-07-15 13:28:59.197885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.674 [2024-07-15 13:28:59.197933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:02.674 [2024-07-15 13:28:59.197948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:02.674 [2024-07-15 13:28:59.197972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:02.674 [2024-07-15 13:28:59.198014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.674 [2024-07-15 13:28:59.198071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:02.674 [2024-07-15 13:28:59.198088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:02.674 [2024-07-15 13:28:59.198101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:02.674 [2024-07-15 13:28:59.198112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.674 [2024-07-15 13:28:59.198300] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 42.991 ms, result 0 00:30:02.932 00:30:02.932 00:30:02.932 13:28:59 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:05.462 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:05.462 13:29:01 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:30:05.462 13:29:01 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:30:05.462 13:29:01 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:05.462 13:29:01 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:05.462 13:29:01 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:05.462 Process with pid 95491 is not found 00:30:05.462 Remove shared memory files 00:30:05.462 13:29:01 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 95491 00:30:05.462 13:29:01 ftl.ftl_restore_fast -- common/autotest_common.sh@946 -- # '[' -z 95491 ']' 00:30:05.462 13:29:01 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # kill -0 95491 00:30:05.462 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (95491) - No such process 00:30:05.462 13:29:01 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # echo 'Process with pid 95491 is not found' 00:30:05.462 
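The ftl_dev_dump_stats output above reports 3616 total writes against 3584 user writes and a WAF of 1.0089. As a quick sanity check, the write-amplification factor is simply the ratio of those two counters; a minimal sketch, assuming bc is available on the test host:

    # WAF = total writes / user writes; 3616 / 3584 ≈ 1.0089, matching the dump above
    echo "scale=4; 3616 / 3584" | bc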
13:29:01 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:30:05.462 13:29:01 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:05.462 13:29:01 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:30:05.462 13:29:01 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_58ef9247-ca11-49fb-b5ae-1b35b173dc4e_band_md /dev/hugepages/ftl_58ef9247-ca11-49fb-b5ae-1b35b173dc4e_l2p_l1 /dev/hugepages/ftl_58ef9247-ca11-49fb-b5ae-1b35b173dc4e_l2p_l2 /dev/hugepages/ftl_58ef9247-ca11-49fb-b5ae-1b35b173dc4e_l2p_l2_ctx /dev/hugepages/ftl_58ef9247-ca11-49fb-b5ae-1b35b173dc4e_nvc_md /dev/hugepages/ftl_58ef9247-ca11-49fb-b5ae-1b35b173dc4e_p2l_pool /dev/hugepages/ftl_58ef9247-ca11-49fb-b5ae-1b35b173dc4e_sb /dev/hugepages/ftl_58ef9247-ca11-49fb-b5ae-1b35b173dc4e_sb_shm /dev/hugepages/ftl_58ef9247-ca11-49fb-b5ae-1b35b173dc4e_trim_bitmap /dev/hugepages/ftl_58ef9247-ca11-49fb-b5ae-1b35b173dc4e_trim_log /dev/hugepages/ftl_58ef9247-ca11-49fb-b5ae-1b35b173dc4e_trim_md /dev/hugepages/ftl_58ef9247-ca11-49fb-b5ae-1b35b173dc4e_vmap 00:30:05.462 13:29:01 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:30:05.462 13:29:01 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:05.462 13:29:01 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:30:05.462 ************************************ 00:30:05.462 END TEST ftl_restore_fast 00:30:05.462 ************************************ 00:30:05.462 00:30:05.462 real 3m12.478s 00:30:05.462 user 2m58.068s 00:30:05.462 sys 0m16.424s 00:30:05.462 13:29:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:05.462 13:29:01 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:30:05.462 13:29:01 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:30:05.462 13:29:01 ftl -- ftl/ftl.sh@14 -- # killprocess 88180 00:30:05.462 Process with pid 88180 is not found 00:30:05.462 13:29:01 ftl -- common/autotest_common.sh@946 -- # '[' -z 88180 ']' 00:30:05.462 13:29:01 ftl -- common/autotest_common.sh@950 -- # kill -0 88180 00:30:05.462 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (88180) - No such process 00:30:05.462 13:29:01 ftl -- common/autotest_common.sh@973 -- # echo 'Process with pid 88180 is not found' 00:30:05.462 13:29:01 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:30:05.462 13:29:01 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=97426 00:30:05.462 13:29:01 ftl -- ftl/ftl.sh@20 -- # waitforlisten 97426 00:30:05.462 13:29:01 ftl -- common/autotest_common.sh@827 -- # '[' -z 97426 ']' 00:30:05.462 13:29:01 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:05.462 13:29:01 ftl -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.462 13:29:01 ftl -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:05.462 13:29:01 ftl -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.462 13:29:01 ftl -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:05.462 13:29:01 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:05.462 [2024-07-15 13:29:01.902601] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
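The restore_kill/remove_shm steps traced above verify the restored file with md5sum, probe the app PID with kill -0, and then drop the hugepage-backed FTL metadata files. A hedged sketch of that teardown pattern; testdir, svcpid and dev_uuid are assumed placeholders here, not the scripts' actual variable names:

    # Verify restored data against the recorded checksum, then remove the test artifacts
    md5sum -c "$testdir/testfile.md5"
    rm -f "$testdir/testfile" "$testdir/testfile.md5"
    # kill -0 only probes the PID; report when the process has already exited
    if ! kill -0 "$svcpid" 2>/dev/null; then
        echo "Process with pid $svcpid is not found"
    fi
    # Drop the per-device FTL metadata kept in hugepage-backed files (UUID 58ef9247-... above)
    rm -f /dev/hugepages/ftl_"${dev_uuid}"_* /dev/shm/iscsi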
00:30:05.462 [2024-07-15 13:29:01.902783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97426 ] 00:30:05.462 [2024-07-15 13:29:02.045677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.462 [2024-07-15 13:29:02.138676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.394 13:29:02 ftl -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:06.394 13:29:02 ftl -- common/autotest_common.sh@860 -- # return 0 00:30:06.394 13:29:02 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:06.652 nvme0n1 00:30:06.652 13:29:03 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:30:06.652 13:29:03 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:06.652 13:29:03 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:06.912 13:29:03 ftl -- ftl/common.sh@28 -- # stores=719d2675-758c-4d74-9248-228f377ae501 00:30:06.912 13:29:03 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:30:06.912 13:29:03 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 719d2675-758c-4d74-9248-228f377ae501 00:30:06.912 13:29:03 ftl -- ftl/ftl.sh@23 -- # killprocess 97426 00:30:06.912 13:29:03 ftl -- common/autotest_common.sh@946 -- # '[' -z 97426 ']' 00:30:06.912 13:29:03 ftl -- common/autotest_common.sh@950 -- # kill -0 97426 00:30:06.912 13:29:03 ftl -- common/autotest_common.sh@951 -- # uname 00:30:06.912 13:29:03 ftl -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:06.912 13:29:03 ftl -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 97426 00:30:07.170 killing process with pid 97426 00:30:07.170 13:29:03 ftl -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:07.170 13:29:03 ftl -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:07.170 13:29:03 ftl -- common/autotest_common.sh@964 -- # echo 'killing process with pid 97426' 00:30:07.170 13:29:03 ftl -- common/autotest_common.sh@965 -- # kill 97426 00:30:07.170 13:29:03 ftl -- common/autotest_common.sh@970 -- # wait 97426 00:30:07.427 13:29:04 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:07.685 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:07.685 Waiting for block devices as requested 00:30:07.685 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:07.943 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:07.943 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:30:07.943 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:30:13.207 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:30:13.207 13:29:09 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:30:13.207 Remove shared memory files 00:30:13.207 13:29:09 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:13.207 13:29:09 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:30:13.207 13:29:09 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:30:13.207 13:29:09 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:30:13.207 13:29:09 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:13.207 13:29:09 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:30:13.207 ************************************ 00:30:13.207 
END TEST ftl 00:30:13.207 ************************************ 00:30:13.207 00:30:13.207 real 13m59.973s 00:30:13.207 user 16m16.267s 00:30:13.207 sys 1m47.116s 00:30:13.207 13:29:09 ftl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:13.207 13:29:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:13.207 13:29:09 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:30:13.207 13:29:09 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:30:13.207 13:29:09 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:30:13.207 13:29:09 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:30:13.207 13:29:09 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:30:13.207 13:29:09 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:30:13.207 13:29:09 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:30:13.207 13:29:09 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:30:13.207 13:29:09 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:30:13.207 13:29:09 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:30:13.207 13:29:09 -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:13.207 13:29:09 -- common/autotest_common.sh@10 -- # set +x 00:30:13.207 13:29:09 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:30:13.207 13:29:09 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:30:13.207 13:29:09 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:30:13.207 13:29:09 -- common/autotest_common.sh@10 -- # set +x 00:30:14.581 INFO: APP EXITING 00:30:14.581 INFO: killing all VMs 00:30:14.581 INFO: killing vhost app 00:30:14.581 INFO: EXIT DONE 00:30:15.149 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:15.406 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:30:15.406 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:30:15.406 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:30:15.406 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:30:15.971 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:16.230 Cleaning 00:30:16.230 Removing: /var/run/dpdk/spdk0/config 00:30:16.230 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:16.230 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:16.230 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:16.230 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:16.230 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:16.230 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:16.230 Removing: /var/run/dpdk/spdk0 00:30:16.230 Removing: /var/run/dpdk/spdk_pid74019 00:30:16.230 Removing: /var/run/dpdk/spdk_pid74180 00:30:16.230 Removing: /var/run/dpdk/spdk_pid74374 00:30:16.230 Removing: /var/run/dpdk/spdk_pid74461 00:30:16.230 Removing: /var/run/dpdk/spdk_pid74490 00:30:16.230 Removing: /var/run/dpdk/spdk_pid74609 00:30:16.230 Removing: /var/run/dpdk/spdk_pid74627 00:30:16.230 Removing: /var/run/dpdk/spdk_pid74785 00:30:16.230 Removing: /var/run/dpdk/spdk_pid74857 00:30:16.230 Removing: /var/run/dpdk/spdk_pid74934 00:30:16.230 Removing: /var/run/dpdk/spdk_pid75025 00:30:16.230 Removing: /var/run/dpdk/spdk_pid75099 00:30:16.230 Removing: /var/run/dpdk/spdk_pid75138 00:30:16.230 Removing: /var/run/dpdk/spdk_pid75178 00:30:16.230 Removing: /var/run/dpdk/spdk_pid75235 00:30:16.230 Removing: /var/run/dpdk/spdk_pid75352 00:30:16.230 Removing: /var/run/dpdk/spdk_pid75793 00:30:16.230 Removing: /var/run/dpdk/spdk_pid75846 00:30:16.230 Removing: /var/run/dpdk/spdk_pid75903 
00:30:16.230 Removing: /var/run/dpdk/spdk_pid75920 00:30:16.230 Removing: /var/run/dpdk/spdk_pid75989 00:30:16.230 Removing: /var/run/dpdk/spdk_pid76005 00:30:16.230 Removing: /var/run/dpdk/spdk_pid76074 00:30:16.230 Removing: /var/run/dpdk/spdk_pid76100 00:30:16.230 Removing: /var/run/dpdk/spdk_pid76143 00:30:16.230 Removing: /var/run/dpdk/spdk_pid76161 00:30:16.230 Removing: /var/run/dpdk/spdk_pid76209 00:30:16.230 Removing: /var/run/dpdk/spdk_pid76227 00:30:16.230 Removing: /var/run/dpdk/spdk_pid76362 00:30:16.230 Removing: /var/run/dpdk/spdk_pid76393 00:30:16.230 Removing: /var/run/dpdk/spdk_pid76474 00:30:16.230 Removing: /var/run/dpdk/spdk_pid76528 00:30:16.230 Removing: /var/run/dpdk/spdk_pid76553 00:30:16.230 Removing: /var/run/dpdk/spdk_pid76615 00:30:16.230 Removing: /var/run/dpdk/spdk_pid76656 00:30:16.230 Removing: /var/run/dpdk/spdk_pid76687 00:30:16.230 Removing: /var/run/dpdk/spdk_pid76727 00:30:16.230 Removing: /var/run/dpdk/spdk_pid76761 00:30:16.230 Removing: /var/run/dpdk/spdk_pid76798 00:30:16.489 Removing: /var/run/dpdk/spdk_pid76833 00:30:16.489 Removing: /var/run/dpdk/spdk_pid76869 00:30:16.489 Removing: /var/run/dpdk/spdk_pid76910 00:30:16.489 Removing: /var/run/dpdk/spdk_pid76940 00:30:16.489 Removing: /var/run/dpdk/spdk_pid76981 00:30:16.489 Removing: /var/run/dpdk/spdk_pid77011 00:30:16.489 Removing: /var/run/dpdk/spdk_pid77052 00:30:16.489 Removing: /var/run/dpdk/spdk_pid77088 00:30:16.489 Removing: /var/run/dpdk/spdk_pid77123 00:30:16.489 Removing: /var/run/dpdk/spdk_pid77159 00:30:16.489 Removing: /var/run/dpdk/spdk_pid77194 00:30:16.489 Removing: /var/run/dpdk/spdk_pid77233 00:30:16.489 Removing: /var/run/dpdk/spdk_pid77277 00:30:16.489 Removing: /var/run/dpdk/spdk_pid77311 00:30:16.489 Removing: /var/run/dpdk/spdk_pid77349 00:30:16.489 Removing: /var/run/dpdk/spdk_pid77420 00:30:16.489 Removing: /var/run/dpdk/spdk_pid77514 00:30:16.489 Removing: /var/run/dpdk/spdk_pid77659 00:30:16.489 Removing: /var/run/dpdk/spdk_pid77732 00:30:16.489 Removing: /var/run/dpdk/spdk_pid77763 00:30:16.489 Removing: /var/run/dpdk/spdk_pid78213 00:30:16.489 Removing: /var/run/dpdk/spdk_pid78300 00:30:16.489 Removing: /var/run/dpdk/spdk_pid78404 00:30:16.489 Removing: /var/run/dpdk/spdk_pid78446 00:30:16.489 Removing: /var/run/dpdk/spdk_pid78470 00:30:16.489 Removing: /var/run/dpdk/spdk_pid78542 00:30:16.489 Removing: /var/run/dpdk/spdk_pid79156 00:30:16.489 Removing: /var/run/dpdk/spdk_pid79187 00:30:16.489 Removing: /var/run/dpdk/spdk_pid79683 00:30:16.489 Removing: /var/run/dpdk/spdk_pid79770 00:30:16.489 Removing: /var/run/dpdk/spdk_pid79874 00:30:16.489 Removing: /var/run/dpdk/spdk_pid79922 00:30:16.489 Removing: /var/run/dpdk/spdk_pid79942 00:30:16.489 Removing: /var/run/dpdk/spdk_pid79973 00:30:16.489 Removing: /var/run/dpdk/spdk_pid81793 00:30:16.489 Removing: /var/run/dpdk/spdk_pid81918 00:30:16.489 Removing: /var/run/dpdk/spdk_pid81923 00:30:16.489 Removing: /var/run/dpdk/spdk_pid81941 00:30:16.489 Removing: /var/run/dpdk/spdk_pid81981 00:30:16.489 Removing: /var/run/dpdk/spdk_pid81985 00:30:16.489 Removing: /var/run/dpdk/spdk_pid81997 00:30:16.489 Removing: /var/run/dpdk/spdk_pid82042 00:30:16.489 Removing: /var/run/dpdk/spdk_pid82046 00:30:16.489 Removing: /var/run/dpdk/spdk_pid82058 00:30:16.489 Removing: /var/run/dpdk/spdk_pid82103 00:30:16.489 Removing: /var/run/dpdk/spdk_pid82107 00:30:16.489 Removing: /var/run/dpdk/spdk_pid82119 00:30:16.489 Removing: /var/run/dpdk/spdk_pid83459 00:30:16.489 Removing: /var/run/dpdk/spdk_pid83547 00:30:16.489 Removing: 
/var/run/dpdk/spdk_pid84427 00:30:16.489 Removing: /var/run/dpdk/spdk_pid84775 00:30:16.489 Removing: /var/run/dpdk/spdk_pid84866 00:30:16.489 Removing: /var/run/dpdk/spdk_pid84960 00:30:16.489 Removing: /var/run/dpdk/spdk_pid85053 00:30:16.489 Removing: /var/run/dpdk/spdk_pid85163 00:30:16.489 Removing: /var/run/dpdk/spdk_pid85233 00:30:16.489 Removing: /var/run/dpdk/spdk_pid85366 00:30:16.489 Removing: /var/run/dpdk/spdk_pid85627 00:30:16.489 Removing: /var/run/dpdk/spdk_pid85658 00:30:16.489 Removing: /var/run/dpdk/spdk_pid86113 00:30:16.489 Removing: /var/run/dpdk/spdk_pid86286 00:30:16.489 Removing: /var/run/dpdk/spdk_pid86374 00:30:16.489 Removing: /var/run/dpdk/spdk_pid86480 00:30:16.489 Removing: /var/run/dpdk/spdk_pid86517 00:30:16.489 Removing: /var/run/dpdk/spdk_pid86542 00:30:16.489 Removing: /var/run/dpdk/spdk_pid86836 00:30:16.489 Removing: /var/run/dpdk/spdk_pid86875 00:30:16.489 Removing: /var/run/dpdk/spdk_pid86927 00:30:16.489 Removing: /var/run/dpdk/spdk_pid87267 00:30:16.489 Removing: /var/run/dpdk/spdk_pid87410 00:30:16.489 Removing: /var/run/dpdk/spdk_pid88180 00:30:16.489 Removing: /var/run/dpdk/spdk_pid88293 00:30:16.489 Removing: /var/run/dpdk/spdk_pid88464 00:30:16.489 Removing: /var/run/dpdk/spdk_pid88556 00:30:16.489 Removing: /var/run/dpdk/spdk_pid88896 00:30:16.489 Removing: /var/run/dpdk/spdk_pid89158 00:30:16.489 Removing: /var/run/dpdk/spdk_pid89501 00:30:16.489 Removing: /var/run/dpdk/spdk_pid89684 00:30:16.489 Removing: /var/run/dpdk/spdk_pid89800 00:30:16.489 Removing: /var/run/dpdk/spdk_pid89846 00:30:16.489 Removing: /var/run/dpdk/spdk_pid89974 00:30:16.489 Removing: /var/run/dpdk/spdk_pid89988 00:30:16.489 Removing: /var/run/dpdk/spdk_pid90030 00:30:16.489 Removing: /var/run/dpdk/spdk_pid90220 00:30:16.489 Removing: /var/run/dpdk/spdk_pid90439 00:30:16.489 Removing: /var/run/dpdk/spdk_pid90842 00:30:16.489 Removing: /var/run/dpdk/spdk_pid91295 00:30:16.489 Removing: /var/run/dpdk/spdk_pid91706 00:30:16.748 Removing: /var/run/dpdk/spdk_pid92198 00:30:16.748 Removing: /var/run/dpdk/spdk_pid92339 00:30:16.748 Removing: /var/run/dpdk/spdk_pid92436 00:30:16.748 Removing: /var/run/dpdk/spdk_pid93096 00:30:16.748 Removing: /var/run/dpdk/spdk_pid93170 00:30:16.748 Removing: /var/run/dpdk/spdk_pid93613 00:30:16.748 Removing: /var/run/dpdk/spdk_pid94023 00:30:16.748 Removing: /var/run/dpdk/spdk_pid94515 00:30:16.748 Removing: /var/run/dpdk/spdk_pid94632 00:30:16.748 Removing: /var/run/dpdk/spdk_pid94671 00:30:16.748 Removing: /var/run/dpdk/spdk_pid94730 00:30:16.748 Removing: /var/run/dpdk/spdk_pid94787 00:30:16.748 Removing: /var/run/dpdk/spdk_pid94844 00:30:16.748 Removing: /var/run/dpdk/spdk_pid95034 00:30:16.748 Removing: /var/run/dpdk/spdk_pid95104 00:30:16.748 Removing: /var/run/dpdk/spdk_pid95170 00:30:16.748 Removing: /var/run/dpdk/spdk_pid95243 00:30:16.748 Removing: /var/run/dpdk/spdk_pid95275 00:30:16.748 Removing: /var/run/dpdk/spdk_pid95353 00:30:16.748 Removing: /var/run/dpdk/spdk_pid95491 00:30:16.748 Removing: /var/run/dpdk/spdk_pid95710 00:30:16.748 Removing: /var/run/dpdk/spdk_pid96108 00:30:16.748 Removing: /var/run/dpdk/spdk_pid96525 00:30:16.748 Removing: /var/run/dpdk/spdk_pid96942 00:30:16.748 Removing: /var/run/dpdk/spdk_pid97426 00:30:16.748 Clean 00:30:16.748 13:29:13 -- common/autotest_common.sh@1447 -- # return 0 00:30:16.748 13:29:13 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:30:16.748 13:29:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:16.748 13:29:13 -- common/autotest_common.sh@10 -- # set +x 
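Earlier in this final cleanup (the clear_lvols step at ftl/ftl.sh@22 above), any leftover lvolstores are enumerated over JSON-RPC and deleted before the target is shut down. A minimal hedged sketch of that loop, with rpc_py standing in for scripts/rpc.py; both RPC methods appear in the trace above:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Delete every lvolstore the target reports
    for lvs in $($rpc_py bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        $rpc_py bdev_lvol_delete_lvstore -u "$lvs"
    done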
00:30:16.748 13:29:13 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:30:16.748 13:29:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:16.748 13:29:13 -- common/autotest_common.sh@10 -- # set +x 00:30:16.748 13:29:13 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:16.748 13:29:13 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:30:16.748 13:29:13 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:30:16.748 13:29:13 -- spdk/autotest.sh@391 -- # hash lcov 00:30:16.748 13:29:13 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:30:16.748 13:29:13 -- spdk/autotest.sh@393 -- # hostname 00:30:16.748 13:29:13 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:30:17.006 geninfo: WARNING: invalid characters removed from testname! 00:30:43.684 13:29:37 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:44.620 13:29:41 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:47.904 13:29:44 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:50.433 13:29:47 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:53.724 13:29:49 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:55.642 13:29:52 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:58.925 13:29:55 -- spdk/autotest.sh@400 -- # rm -f cov_base.info 
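The coverage steps above first capture and merge the base and test tracefiles; the lcov -r passes that follow then strip DPDK, system and example sources from the combined report. A condensed hedged sketch of that flow, with paths shortened and the long --rc options omitted:

    # Merge base + test captures, then filter out sources that should not count toward coverage
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
    lcov -q -r cov_total.info '/usr/*'   -o cov_total.info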
cov_test.info OLD_STDOUT OLD_STDERR 00:30:58.925 13:29:55 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:58.925 13:29:55 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:58.925 13:29:55 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:58.925 13:29:55 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:58.925 13:29:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.925 13:29:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.925 13:29:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.925 13:29:55 -- paths/export.sh@5 -- $ export PATH 00:30:58.925 13:29:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.925 13:29:55 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:30:58.925 13:29:55 -- common/autobuild_common.sh@437 -- $ date +%s 00:30:58.925 13:29:55 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721050195.XXXXXX 00:30:58.925 13:29:55 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721050195.GNnUqF 00:30:58.925 13:29:55 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:30:58.925 13:29:55 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']' 00:30:58.925 13:29:55 -- common/autobuild_common.sh@444 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:30:58.925 13:29:55 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:30:58.925 13:29:55 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:30:58.925 13:29:55 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:30:58.925 13:29:55 -- common/autobuild_common.sh@453 -- $ get_config_params 00:30:58.925 13:29:55 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:30:58.925 13:29:55 -- common/autotest_common.sh@10 -- $ set +x 00:30:58.925 13:29:55 
-- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:30:58.925 13:29:55 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:30:58.925 13:29:55 -- pm/common@17 -- $ local monitor 00:30:58.925 13:29:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:58.925 13:29:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:58.925 13:29:55 -- pm/common@25 -- $ sleep 1 00:30:58.925 13:29:55 -- pm/common@21 -- $ date +%s 00:30:58.925 13:29:55 -- pm/common@21 -- $ date +%s 00:30:58.925 13:29:55 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721050195 00:30:58.925 13:29:55 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721050195 00:30:58.925 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721050195_collect-vmstat.pm.log 00:30:58.925 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721050195_collect-cpu-load.pm.log 00:30:59.490 13:29:56 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:30:59.490 13:29:56 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:30:59.490 13:29:56 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:30:59.490 13:29:56 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:59.490 13:29:56 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:30:59.490 13:29:56 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:59.490 13:29:56 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:59.490 13:29:56 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:59.490 13:29:56 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:59.490 13:29:56 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:59.748 13:29:56 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:59.748 13:29:56 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:59.748 13:29:56 -- pm/common@29 -- $ signal_monitor_resources TERM 00:30:59.748 13:29:56 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:30:59.748 13:29:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:59.748 13:29:56 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:30:59.748 13:29:56 -- pm/common@44 -- $ pid=99134 00:30:59.748 13:29:56 -- pm/common@50 -- $ kill -TERM 99134 00:30:59.748 13:29:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:59.748 13:29:56 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:30:59.748 13:29:56 -- pm/common@44 -- $ pid=99135 00:30:59.748 13:29:56 -- pm/common@50 -- $ kill -TERM 99135 00:30:59.748 + [[ -n 6043 ]] 00:30:59.748 + sudo kill 6043 00:30:59.758 [Pipeline] } 00:30:59.779 [Pipeline] // timeout 00:30:59.784 [Pipeline] } 00:30:59.803 [Pipeline] // stage 00:30:59.809 [Pipeline] } 00:30:59.828 [Pipeline] // catchError 00:30:59.838 [Pipeline] stage 00:30:59.840 [Pipeline] { 
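start_monitor_resources above launches the collect-cpu-load and collect-vmstat helpers with -p so each one records a PID under the power/ output directory; the stop_monitor_resources trap further below reads those .pid files back and sends TERM. A hedged sketch of that start/stop pattern; some_collector and output_dir are hypothetical stand-ins, not the real helper names:

    # Start a background collector and record its PID for later teardown
    some_collector --interval 1 > "$output_dir/power/load.log" &
    echo $! > "$output_dir/power/collect-cpu-load.pid"
    # On exit, terminate every collector whose PID file is still present
    for pidfile in "$output_dir"/power/*.pid; do
        kill -TERM "$(cat "$pidfile")"
    done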
(Stop VM) 00:30:59.857 [Pipeline] sh 00:31:00.136 + vagrant halt 00:31:03.426 ==> default: Halting domain... 00:31:10.005 [Pipeline] sh 00:31:10.286 + vagrant destroy -f 00:31:13.571 ==> default: Removing domain... 00:31:14.622 [Pipeline] sh 00:31:14.900 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output 00:31:14.911 [Pipeline] } 00:31:14.930 [Pipeline] // stage 00:31:14.935 [Pipeline] } 00:31:14.952 [Pipeline] // dir 00:31:14.957 [Pipeline] } 00:31:14.974 [Pipeline] // wrap 00:31:14.980 [Pipeline] } 00:31:14.995 [Pipeline] // catchError 00:31:15.005 [Pipeline] stage 00:31:15.007 [Pipeline] { (Epilogue) 00:31:15.021 [Pipeline] sh 00:31:15.299 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:31:21.869 [Pipeline] catchError 00:31:21.871 [Pipeline] { 00:31:21.886 [Pipeline] sh 00:31:22.160 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:31:22.160 Artifacts sizes are good 00:31:22.168 [Pipeline] } 00:31:22.184 [Pipeline] // catchError 00:31:22.196 [Pipeline] archiveArtifacts 00:31:22.202 Archiving artifacts 00:31:22.368 [Pipeline] cleanWs 00:31:22.386 [WS-CLEANUP] Deleting project workspace... 00:31:22.386 [WS-CLEANUP] Deferred wipeout is used... 00:31:22.392 [WS-CLEANUP] done 00:31:22.394 [Pipeline] } 00:31:22.412 [Pipeline] // stage 00:31:22.418 [Pipeline] } 00:31:22.433 [Pipeline] // node 00:31:22.437 [Pipeline] End of Pipeline 00:31:22.475 Finished: SUCCESS